Can robots inherit human bias? Yes. Now, the harm has a face.
This “invisible” AI can make itself visible in unintended and occasionally upsetting ways. In 2018, Amazon scrapped its AI recruiting software because it demonstrated a bias against women. As reported by Reuters, Amazon’s own machine learning specialists realized that the algorithm’s training data had been culled from patterns in resumes submitted over a 10-year period during which men dominated the software industry.
ProPublica found problems with a risk-assessment tool that is widely used in the criminal justice system. The tool is designed to predict recidivism (relapse into criminal behavior) in the prison population. Its risk estimates incorrectly designated African American defendants as more likely to commit future crimes than Caucasian defendants.
These unintended consequences were less of a problem in the past, because every piece of software logic was explicitly hand-coded, reviewed and tested. AI algorithms, on the other hand, learn from existing examples without relying on explicit rules-based programming. This is a useful approach when sufficient, accurately representative data is available and when it would be difficult or costly to model the rules by hand, such as distinguishing between a cat and a dog in an image. But, depending on the circumstances, this methodology can lead to problems.
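To make the contrast concrete, here is a minimal sketch, in Python with scikit-learn, of what “learning from examples” means: instead of hand-coding rules that separate cats from dogs, a classifier is fitted to labeled examples. The feature names and values are invented purely for illustration and are not from any real system.

```python
# A minimal sketch (illustrative only): the "rules" are learned from labeled
# examples rather than written by hand.
from sklearn.linear_model import LogisticRegression

# Hypothetical features per image: [ear_pointiness, snout_length]
X_train = [[0.9, 0.2], [0.8, 0.3], [0.2, 0.9], [0.3, 0.8]]
y_train = ["cat", "cat", "dog", "dog"]   # labels supplied by humans

model = LogisticRegression()
model.fit(X_train, y_train)              # the decision boundary is learned, not coded

print(model.predict([[0.85, 0.25]]))     # -> ['cat']
```

Whatever patterns sit in those examples, including skewed or unrepresentative ones, are what the model learns.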
There is growing concern that AI sometimes generates distorted views of its subjects, leading to bad decisions. For us to effectively shape the future of this technology, we need to study its anthropology and understand it.
The concept of distorted data can be too abstract to grasp, which makes it difficult to identify. After the congressional hearings on Facebook, I felt that the general public needed a better awareness of these concepts.
Art can help with creating this awareness. In a photography project called “Human Trials,” I created an artistic representation of this distortion based on possible portraits of people who do not exist, created using AI algorithms.
Stick with me as I explain how I made the portraits.
The process used two AI algorithms. The first was trained to look at images of people and tell them apart from other images. The second generated images to try to fool the first algorithm into thinking that its output belonged to the group of real people I photographed in my studio. The process iterated, and the second algorithm kept improving until it consistently fooled the first.
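For readers who want to see the mechanics, here is a minimal sketch in PyTorch of that two-algorithm setup, commonly called a generative adversarial network: a discriminator learns to tell real photographs from generated ones, while a generator keeps adjusting until its output fools the discriminator. The network sizes and hyperparameters are assumptions for illustration, not the configuration used for “Human Trials.”

```python
# A minimal GAN sketch (illustrative assumptions, not the project's actual setup).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28           # assumed sizes for the sketch

generator = nn.Sequential(                   # algorithm 2: invents images
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(               # algorithm 1: real vs. generated
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_images):                 # real_images: (batch, img_dim)
    batch = real_images.size(0)
    real, fake = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real photos from generated ones.
    generated = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_images), real) + \
             loss_fn(discriminator(generated), fake)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator so the discriminator calls its output "real".
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))), real)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Each pass through `train_step` is one round of the back-and-forth described above; after enough rounds, the generator’s images become hard to tell from the real ones.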
The website thispersondoesnotexist.com used this type of algorithm to create stunningly realistic images of people who, as the name of the website makes clear, don’t exist. What I did differently was to photograph my original, real subjects using a technique called “light painting.” During a 20-minute exposure, I used a flashlight to illuminate each person’s face unevenly while the subjects were moving, creating images with parts of their faces distorted or missing. The images created by the algorithm are, in turn, distorted. If you were creating a representation of a human and you didn’t have all the information to put it together, you would end up with distorted representations such as these.
When a mortgage company, a recruiting service or a crime-prediction tool develops a distorted version of people, it is an invisible kind of harm. These photographs make the pain visible by applying the process to a human face.
What can we do to prevent bias in AI, and the harm it causes?
One important aspect of good data is that it has both breadth and depth: for example, data on a large number of customers, and deeper data on each customer. This enables models to handle situations better and more predictably and helps reduce bias; in fact, it was this lack of breadth in its data that Amazon had to deal with in its recruiting software. AI researchers are defining more ways to improve fairness for both groups and individuals.
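As a small illustration of checking breadth, here is a sketch (with made-up rows, not Amazon’s data) that measures how well each group is represented in a training set before any model is fitted. A heavy skew like this is exactly the kind of imbalance a model will learn and reproduce.

```python
# A minimal sketch of a "breadth" check on hypothetical training data.
import pandas as pd

resumes = pd.DataFrame({
    "gender": ["M", "M", "M", "M", "F"],     # made-up, illustrative rows
    "years_experience": [5, 7, 3, 10, 6],
})

# Share of each group in the training data; a strong skew signals that the
# model will mostly learn patterns from the dominant group.
print(resumes["gender"].value_counts(normalize=True))
```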
Some solutions that seem promising, though, don’t actually work. It turns out that removing protected attributes, such as gender, race, religion or disability, from the training data before modeling does little to address bias and may even conceal it. That is because this “fairness through unawareness,” as it has been called, ignores redundant encodings: ways of inferring a protected attribute, such as race or ethnicity, from unprotected features, such as, say, a ZIP code in a highly segregated city or a Hispanic surname. To address this, we can also remove the attributes that are highly correlated with the protected attribute. The algorithm can also be checked early on for disparities in false positives and false negatives.
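Here is a minimal sketch of those two checks, using pandas with hypothetical column names: dropping unprotected features that are strongly correlated with a protected attribute, and comparing false positive and false negative rates across groups. It assumes the protected attribute has already been numerically encoded; a real audit would use a dedicated fairness toolkit and a careful choice of thresholds.

```python
# Minimal fairness-check sketches (hypothetical column names, illustrative only).
import pandas as pd

def drop_proxy_features(df: pd.DataFrame, protected: str, threshold: float = 0.8):
    """Drop numeric columns strongly correlated with a (numerically encoded)
    protected attribute, e.g. a ZIP code acting as a proxy for race."""
    corr = df.corr(numeric_only=True)[protected].abs()
    proxies = [c for c in corr.index if c != protected and corr[c] > threshold]
    return df.drop(columns=proxies), proxies

def error_rates_by_group(y_true, y_pred, group):
    """False positive and false negative rates for each group value."""
    frame = pd.DataFrame({"y": y_true, "pred": y_pred, "group": group})
    rates = {}
    for g, sub in frame.groupby("group"):
        negatives, positives = sub[sub.y == 0], sub[sub.y == 1]
        fpr = (negatives.pred == 1).mean() if len(negatives) else float("nan")
        fnr = (positives.pred == 0).mean() if len(positives) else float("nan")
        rates[g] = {"false_positive_rate": fpr, "false_negative_rate": fnr}
    return rates
```

Large gaps in those rates between groups are the kind of disparity ProPublica reported in the recidivism tool.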
In 2014, Stephen Hawking speculated aloud on a future many Hollywood films have depicted: “The development of full artificial intelligence could spell the end of the human race.”
This disturbing quote is often thought to refer to phenomena such as self-conscious, AI-enabled robots that could eventually take over the world. While the AI in current use is far too narrow to bring about the end of humans, it has already created disturbing problems.
What many are not aware of is what Hawking said next: “I am an optimist, and I believe that we can create AI for the good of the world. That it can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance.”
Making AI fair is not just a nice idea; it is a social imperative. If we study and question the technology from many angles, AI has the potential to improve the quality of life of everyone on the planet, raising our earning potential and helping us live longer, healthier lives.
Rashed Haq (@rashehaq) is an artist and AI and robotics engineer. His latest book is “Enterprise Artificial Intelligence Transformation.” His series “Human Trials” won the Lenscratch Art+Science award for 2021.