
Can robots inherit human bias? Yes. Now, the harm has a face.


People may not notice artificial intelligence in their day-to-day lives, but it is there. AI is now used to review applications for mortgages and sort through resumes to find a small pool of appropriate candidates before job interviews are scheduled. AI systems curate content for every individual on Facebook. Phone calls to the customer-service departments of cable providers, utility companies and banks, among other institutions, are answered by voice recognition systems based on AI.

This “invisible” AI, however, can make itself visible in some unintended and occasionally upsetting ways. In 2018, Amazon scrapped some of its AI recruiting software because it demonstrated a bias against women. As Reuters reported, Amazon’s own machine learning specialists realized the algorithm had been trained on patterns in resumes submitted to the company over a 10-year period when men dominated the software industry.

ProPublica found problems with a risk-assessment tool that is widely used in the criminal justice system. The tool is designed to predict recidivism (relapse into criminal behavior) in the prison population. Its risk estimates incorrectly flagged African American defendants as more likely to commit future crimes than Caucasian defendants.

These unintended consequences were less of a problem in the past, because every piece of software logic was explicitly hand-coded, reviewed and tested. AI algorithms, by contrast, learn from existing examples without relying on explicit rules-based programming. This approach is useful when sufficient, accurately representative data is available and when the rules would be difficult or costly to write by hand, such as telling whether an image shows a cat or a dog. But, depending on the circumstances, this methodology can lead to problems.
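To make that idea concrete, here is a minimal, purely illustrative Python sketch. The keywords, data and threshold are invented for the example and do not come from any real system. Notice that nothing about who should be hired is hand-coded; the “rule” is simply whatever hire rates the historical examples happen to contain, so a skewed history produces a skewed rule.

```python
from collections import Counter

# Toy historical records: (keyword on the resume, was the candidate hired?)
# The keyword stands in for any feature correlated with a protected group.
history = [
    ("chess_club", True), ("chess_club", True), ("chess_club", False),
    ("womens_team", False), ("womens_team", False), ("womens_team", True),
]

def learn_rule(examples):
    """Learn, per keyword, the historical hire rate.
    No logic is written by hand; the 'rule' is whatever the past data contains."""
    hires, totals = Counter(), Counter()
    for keyword, hired in examples:
        totals[keyword] += 1
        hires[keyword] += int(hired)
    return {kw: hires[kw] / totals[kw] for kw in totals}

def screen(resume_keyword, learned_rates, threshold=0.5):
    """Advance a candidate only if the learned historical hire rate clears the bar."""
    return learned_rates.get(resume_keyword, 0.0) >= threshold

rates = learn_rule(history)
print(rates)                        # roughly {'chess_club': 0.67, 'womens_team': 0.33}
print(screen("chess_club", rates))  # True  -- favored by the skewed history
print(screen("womens_team", rates)) # False -- penalized by the same history
```

A model trained this way is only as fair as the examples it learns from, which is the basic failure mode behind the recruiting and risk-assessment cases described above.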

There is growing concern that AI sometimes generates distorted views of its subjects, leading to bad decisions. For us to shape the future of this technology effectively, we need to study and understand its anthropology.

The concept of distorted data can be too abstract to grasp, which makes the problem difficult to identify. After the congressional hearings on Facebook, I felt the general public needed a better awareness of these concepts.

Art can help create this awareness. In a photography project called “Human Trials,” I used AI algorithms to generate possible portraits of people who do not exist, as an artistic representation of this distortion.

Stick with me as I explain how I made the portraits.


