{"id":27553,"date":"2022-01-09T01:34:05","date_gmt":"2022-01-09T01:34:05","guid":{"rendered":"https:\/\/cjstudents.com\/?p=27553"},"modified":"2022-01-09T01:34:05","modified_gmt":"2022-01-09T01:34:05","slug":"can-robots-inherit-human-bias-yes-now-the-harm-has-a-face","status":"publish","type":"post","link":"https:\/\/cjstudents.com\/index.php\/2022\/01\/09\/can-robots-inherit-human-bias-yes-now-the-harm-has-a-face\/","title":{"rendered":"Can robots inherit human bias? Yes. Now, the harm has a face."},"content":{"rendered":"<p>People may not notice artificial intelligence in their day-to-day lives, but it is there. AI is now used to review applications for mortgages and sort through resumes to find a small pool of appropriate candidates before job interviews are scheduled. AI systems curate content for every individual on Facebook. Phone calls to the customer-service departments of cable providers, utility companies and banks, among other institutions, are answered by voice recognition systems based on AI.<\/p>\n<p>This \u201cinvisible\u201d AI, however, can make itself visible in some unintended and occasionally upsetting ways. In 2018, Amazon scrapped some of its AI recruiting software because it demonstrated a bias against women. 
As <a target=\"_blank\" href=\"https:\/\/www.reuters.com\/article\/us-amazon-com-jobs-automation-insight\/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G\" rel=\"noopener\">reported<\/a> by Reuters, Amazon\u2019s own machine learning specialists realized that their algorithm\u2019s training data had been culled from patterns in resumes submitted over a 10-year period, when men dominated the software industry.<\/p>\n<div id=\"paywall\">\n<p>ProPublica <a target=\"_blank\" href=\"https:\/\/www.propublica.org\/article\/how-we-analyzed-the-compas-recidivism-algorithm\" rel=\"noopener\">found problems<\/a> with a risk-assessment tool that is widely used in the criminal justice system. The tool is designed to predict recidivism (relapse into criminal behavior) in the prison population. Its risk estimates incorrectly designated African American defendants as more likely to commit future crimes than Caucasian defendants.<\/p>\n<p>These unintended consequences were less of a problem in the past, because every piece of software logic was explicitly hand-coded, reviewed and tested. AI algorithms, on the other hand, learn from existing examples without relying on explicit rules-based programming. This is a useful approach when sufficient and accurately representative data is available and when it would be difficult or costly to model the rules by hand \u2014 for example, distinguishing between a cat and a dog in an image. But, depending on the circumstances, this methodology can lead to problems.<\/p>\n<p>There is growing concern that AI sometimes generates distorted views of its subjects, leading to bad decisions. For us to effectively shape the future of technology, we need to study and understand the anthropology of it.<\/p>\n<p>The concept of distorted data can be too abstract to grasp, making it difficult to identify. 
After the congressional hearings on Facebook, I felt that the general public needed better awareness of these concepts.<\/p>\n<p>Art can help create this awareness. In a photography project called \u201cHuman Trials,\u201d I created an artistic representation of this distortion based on possible portraits of people who do not exist, created using AI algorithms.<\/p>\n<p>Stick with me as I explain how I made the portraits.<\/p>\n<p>The process used two AI algorithms. The first was trained to look at images of people and tell them apart from other images. The second generates images to try to fool the first algorithm into thinking that its generated image belongs to a group of real people I photographed in my studio. This process iterates, and the second algorithm continues to improve until it consistently fools the first, a setup known as a generative adversarial network.<\/p>\n<p>The website <a target=\"_blank\" href=\"https:\/\/urldefense.com\/v3\/__http:\/thispersondoesnotexist.com__;!!Ivohdkk!2gnz-_7VJ9Q0YvDZHlcmUjrt2kFd38PLi8PDlEnBe7nWZulCN-ag5YTJKS3T6KQ%24\" rel=\"noopener\">thispersondoesnotexist.com<\/a> used this type of algorithm to create stunningly realistic images of people who, as the name of the website makes clear, don\u2019t exist. What I did differently was to photograph my original, real subjects using a technique called \u201clight painting.\u201d During a 20-minute exposure, I used a flashlight to illuminate each person\u2019s face unevenly while the subjects were moving, creating images of the subjects with parts of their faces distorted or missing. The images created by the algorithm are, in turn, distorted. 
If you were creating a representation of a human and you didn\u2019t have all the information to put it together, you would end up with distorted results such as these.<\/p>\n<p>When a mortgage company, a recruiting service or crime-prediction software develops a distorted version of people, it is an invisible kind of harm. These photographs make the pain visible by applying the process to a human face.<\/p>\n<p>What can we do to prevent bias in AI, and the harm it causes?<\/p>\n<p>One important aspect of good data is that it needs to have both breadth and depth: data on a large number of customers, for example, and deeper data on each customer. This enables models to handle situations better and more predictably and helps reduce bias; in fact, it was this lack of breadth in its data that Amazon had to deal with in its recruiting software. AI researchers are defining more ways to improve fairness for groups and individuals.<\/p>\n<p>Some solutions that seem promising, though, don\u2019t actually work. It turns out that removing protected attributes, such as gender, race, religion or disability, from the training data before modeling does nothing to address bias and may even conceal it. That is because this \u201cfairness through unawareness,\u201d as it has been called, ignores redundant encodings \u2014 ways of inferring a protected attribute, such as race or ethnicity, from unprotected features, such as, say, a ZIP code in a highly segregated city or a Hispanic surname. To address this, we also remove the attributes that are highly correlated with the protected attribute. 
The algorithm can also be checked early on for disparities in false positives and false negatives across groups.<\/p>\n<p>In 2014, Stephen Hawking speculated aloud on a future many Hollywood films have depicted: \u201cThe development of full artificial intelligence could spell the end of the human race.\u201d<\/p>\n<p>This disturbing quote is often thought to refer to phenomena such as self-conscious, AI-enabled robots that could eventually take over the world. While the AI in current use is far too narrow to bring about the end of humans, it has already created disturbing problems.<\/p>\n<p>What many are not aware of is what Hawking said next: \u201cI am an optimist, and I believe that we can create AI for the good of the world. That it can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance.\u201d<\/p>\n<p>Making AI fair is not just a nice idea, but a social imperative. If we study and question the technology from many angles, AI has the potential to improve the quality of life of everyone on the planet, raising our earning potential and helping us to live longer and healthier lives.<\/p>\n<p>            <em><\/p>\n<p>\n                    <a target=\"_blank\" href=\"https:\/\/urldefense.com\/v3\/__https:\/www.rashedhaq.com__;!!Ivohdkk!2gnz-_7VJ9Q0YvDZHlcmUjrt2kFd38PLi8PDlEnBe7nWZulCN-ag5YTJAuit_4s%24\" rel=\"noopener\">Rashed Haq<\/a> (<a target=\"_blank\" href=\"https:\/\/urldefense.com\/v3\/__https:\/twitter.com\/rashedhaq\/__;!!Ivohdkk!2gnz-_7VJ9Q0YvDZHlcmUjrt2kFd38PLi8PDlEnBe7nWZulCN-ag5YTJKNaWQsw%24\" rel=\"noopener\">@rashehaq<\/a>) is an artist and AI and robotics engineer. 
His latest book is \u201c<a target=\"_blank\" href=\"https:\/\/urldefense.com\/v3\/__https:\/www.amazon.com\/Enterprise-Artificial-Intelligence-Transformation-Rashed\/dp\/1119665930\/ref=as_li_ss_tl?dchild=1&amp;amp;keywords=rashed*20haq&amp;amp;language=en_US&amp;amp;linkCode=sl1&amp;amp;linkId=3cca6c04578493db94835fa8aecbc653&amp;amp;qid=1586621367&amp;amp;sr=8-1&amp;amp;tag=zerpoiene-20__;JQ!!Ivohdkk!2gnz-_7VJ9Q0YvDZHlcmUjrt2kFd38PLi8PDlEnBe7nWZulCN-ag5YTJNSQht80%24\" rel=\"noopener\">Enterprise Artificial Intelligence Transformation<\/a>.\u201d His series \u201cHuman Trials\u201d won the Lenscratch Art+Science award for 2021.<\/p>\n<p>                <\/em><\/p>\n<section id=\"articleBottom\" class=\"article--content-zone bottom\"\/><\/div>\n<p><a href=\"https:\/\/www.houstonchronicle.com\/opinion\/editorials\/article\/Essay-Can-robots-inherit-human-bias-Yes-Now-16759778.php\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>People may not notice artificial intelligence in their day-to-day lives, but it is 
there&#8230;.<\/p>\n","protected":false},"author":1,"featured_media":27554,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[23],"tags":[],"class_list":["post-27553","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-learningtheory"],"_links":{"self":[{"href":"https:\/\/cjstudents.com\/index.php\/wp-json\/wp\/v2\/posts\/27553","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cjstudents.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cjstudents.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cjstudents.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/cjstudents.com\/index.php\/wp-json\/wp\/v2\/comments?post=27553"}],"version-history":[{"count":1,"href":"https:\/\/cjstudents.com\/index.php\/wp-json\/wp\/v2\/posts\/27553\/revisions"}],"predecessor-version":[{"id":27555,"href":"https:\/\/cjstudents.com\/index.php\/wp-json\/wp\/v2\/posts\/27553\/revisions\/27555"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cjstudents.com\/index.php\/wp-json\/wp\/v2\/media\/27554"}],"wp:attachment":[{"href":"https:\/\/cjstudents.com\/index.php\/wp-json\/wp\/v2\/media?parent=27553"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cjstudents.com\/index.php\/wp-json\/wp\/v2\/categories?post=27553"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cjstudents.com\/index.php\/wp-json\/wp\/v2\/tags?post=27553"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}