
Understanding Bias in Artificial Intelligence Models and Ways to Mitigate It


Artificial intelligence’s growing use in sensitive sectors such as recruiting, criminal justice, and healthcare has sparked debate about bias and fairness. At the same time, human decision-making in these and other fields can itself be flawed because of unconscious individual and social biases. How can these biases be mitigated?

Understanding AI in greater depth

Artificial intelligence (AI) aims to make computers sophisticated enough to emulate human cognitive functions. As a result, AI is a vast field encompassing computer vision, language processing, creativity, and summarization. Machine learning is the discipline within AI that deals with its statistical side: it trains a computer to solve problems by studying hundreds or thousands of examples, learning from them, and applying what it has learned to new circumstances. Deep learning is a subset of machine learning that allows computers to learn and make decisions more independently; compared to most machine learning methods, it involves a higher level of automation.
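
As a rough illustration of that learn-from-examples loop, the sketch below (in Python, with invented data and a deliberately simple nearest-neighbor rule) "trains" on a handful of labeled cases and then classifies a case it has never seen. It is not any particular production algorithm, just the general pattern.

    # A minimal sketch of supervised machine learning: the "model" studies
    # labeled examples and applies what it has learned to a new, unseen case.
    # All data here is invented purely for illustration.

    def distance(a, b):
        """Squared Euclidean distance between two feature vectors."""
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def predict(training_examples, new_case):
        """Label the new case with the label of its closest training example."""
        closest = min(training_examples, key=lambda ex: distance(ex["features"], new_case))
        return closest["label"]

    # Hypothetical training instances: a feature vector and a known outcome.
    training_examples = [
        {"features": (1.0, 0.2), "label": "benign"},
        {"features": (0.9, 0.1), "label": "benign"},
        {"features": (0.2, 0.9), "label": "malignant"},
        {"features": (0.1, 1.0), "label": "malignant"},
    ]

    # The "new circumstance": a case the model has never seen before.
    print(predict(training_examples, (0.3, 0.8)))  # -> malignant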

Examples of bias in AI models

With a self-driving automobile, for example, several ethical issues may arise, such as whether the car should collide with a sign and risk injuring its passengers or collide with pedestrians on the side of the road and perhaps spare its passengers. How does one arrive at such decisions? It is an intriguing topic with many unanswered questions, and it is unclear how to create standards so that different car manufacturers behave in a uniform, ethical, and predictable manner.

Or consider a computer vision company that develops AI classification models for healthcare professionals who use MRIs, CT scans, and X-rays to identify cancer. When a person’s life is at stake, a doctor may find it difficult to trust an AI model’s diagnosis.

The general sources of bias

The problem is almost always the underlying data rather than the algorithm itself. Models may be trained on data that incorporates human judgments or that reflects the second-order effects of societal or historical inequities. Bias can also enter the data through how it is collected or selected for use, and user-generated data can create a bias-reinforcing feedback loop.
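
The following toy simulation (invented numbers, hypothetical group names) illustrates one of these mechanisms: two groups have the same true rate of some outcome, but one is monitored more heavily, so its events are recorded more often. A model trained on such records would "learn" a group difference that does not exist in reality.

    # Toy illustration of how data collection can bake bias into a dataset.
    # Both groups have the same true rate, but one is recorded more reliably.
    # All numbers are invented for illustration.
    import random

    random.seed(0)
    TRUE_RATE = 0.10                                 # identical underlying rate in both groups
    RECORDING = {"group_a": 0.9, "group_b": 0.45}    # chance that an event is actually recorded

    records = []
    for group, record_prob in RECORDING.items():
        for _ in range(10_000):
            event = random.random() < TRUE_RATE                  # what actually happens
            recorded = event and random.random() < record_prob   # what ends up in the data
            records.append((group, recorded))

    for group in RECORDING:
        observed = [recorded for g, recorded in records if g == group]
        print(f"{group}: observed rate {sum(observed) / len(observed):.3f}")
    # The observed rates differ even though the true rates are equal, so a model
    # trained on these records would pick up a spurious group difference.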

AI can help reduce bias, but it can also introduce and scale it

Because machine learning algorithms learn to consider only the variables that improve their predictive accuracy on the training data, AI can reduce humans’ subjective interpretation of data in many circumstances. Jon Kleinberg and others have shown, for example, that algorithms can help reduce racial disparities in the criminal justice system. At the same time, there is ample evidence that AI models can embed and deploy human and societal biases at scale. It will almost certainly never be possible to settle on a single, universal definition of fairness or a single metric to quantify it; instead, different measurements and standards will likely be required depending on the use case and conditions.
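
A small sketch of why one metric is not enough, using invented predictions and hypothetical groups: the toy model below gives positive predictions to both groups at the same rate, so it looks fair under a demographic-parity definition, yet it detects truly positive cases in only one group, so it fails badly under an equal-opportunity definition.

    # Two common fairness measures computed on invented predictions, showing
    # that a model can pass one definition of fairness and fail another.

    # Each record is (group, model_prediction, true_outcome), where 1 means positive.
    records = [
        ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 0, 0),
        ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 0), ("group_b", 0, 1),
    ]

    def selection_rate(group):
        """Share of the group that receives a positive prediction."""
        preds = [p for g, p, _ in records if g == group]
        return sum(preds) / len(preds)

    def true_positive_rate(group):
        """Among truly positive cases, the share the model actually flags."""
        preds = [p for g, p, y in records if g == group and y == 1]
        return sum(preds) / len(preds)

    # Demographic parity: do both groups receive positive predictions equally often?
    print("selection rates:", selection_rate("group_a"), selection_rate("group_b"))
    # Equal opportunity: are truly positive cases detected equally often in both groups?
    print("true positive rates:", true_positive_rate("group_a"), true_positive_rate("group_b"))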

Mitigation methods

Several techniques have been proposed to enforce fairness constraints on AI models. The first is to pre-process the data so that it stays as accurate as feasible while any association between outcomes and protected attributes is removed, or to build data representations that contain no information about sensitive attributes. This group includes “counterfactual fairness” techniques, which are founded on the premise that a decision should remain the same in a counterfactual world in which a sensitive attribute is changed. Post-processing techniques are the second approach: to satisfy a fairness constraint, they alter some of the model’s predictions after they are made. The third approach builds fairness into training itself, either by imposing fairness constraints on the optimization process or by using an adversary to reduce the system’s ability to predict the sensitive attribute.
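
As one concrete, deliberately simplified example of the post-processing idea, the sketch below leaves a model’s raw scores untouched and instead picks a per-group decision threshold so that each group’s selection rate matches a target. The scores, group names, and target rate are all invented; real systems use far more careful procedures.

    # Toy post-processing sketch: choose a decision threshold per group so that
    # selection rates come out (approximately) equal. All values are invented.

    scores = {
        "group_a": [0.92, 0.81, 0.77, 0.55, 0.40, 0.22],
        "group_b": [0.70, 0.62, 0.48, 0.35, 0.30, 0.10],
    }

    def threshold_for_rate(group_scores, target_rate):
        """Pick the score threshold that selects roughly target_rate of the group."""
        k = round(target_rate * len(group_scores))
        ranked = sorted(group_scores, reverse=True)
        return ranked[k - 1] if k > 0 else float("inf")

    TARGET_RATE = 0.5  # desired positive-prediction rate for every group

    for group, group_scores in scores.items():
        threshold = threshold_for_rate(group_scores, TARGET_RATE)
        decisions = [s >= threshold for s in group_scores]
        print(f"{group}: threshold {threshold:.2f}, "
              f"selection rate {sum(decisions) / len(decisions):.2f}")

Equalizing selection rates this way typically trades some raw accuracy for parity, which is exactly the kind of trade-off that, as noted below, should be made explicit.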

Be aware of the situations in which AI can help correct bias and those in which it carries a significant risk of amplifying it. When deploying AI, it is crucial to pay attention to areas prone to unfair bias, such as those with a history of biased systems or skewed data. The following practices can help reduce bias in AI systems:

  • Establish procedures for detecting and mitigating bias in AI systems. Operational techniques include improving data collection through more purposeful sampling and using internal “red teams” or third parties to audit data and models (a minimal audit sketch follows this list). Transparency about processes and metrics also helps observers understand the steps taken to ensure fairness, as well as the trade-offs involved.
  • Engage in fact-based discussions about bias in human decision-making. When models trained on current human decisions or behavior show bias, organizations should explore how the underlying human-driven processes can be improved. As AI reveals more about human decision-making, leaders can assess whether the proxies used in the past were appropriate and how AI can help surface long-standing biases that may have gone unnoticed.
  • Invest more in bias research, make more data available for analysis (while protecting privacy), and take a multidisciplinary approach. Further progress will require interdisciplinary participation, including ethicists, social scientists, and the experts who best understand the nuances of each application area. As the field evolves and practical experience with real-world applications grows, regularly assessing and evaluating the role of AI decision-making will remain a core part of this interdisciplinary approach.
  • Invest in diversifying the AI field itself. A more diverse AI community will be better able to anticipate, recognize, and investigate instances of unfair bias, and to engage the communities most likely to be affected by it.
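
The audit sketch referenced in the first item above might, in its simplest form, look like the following: for each group, compare its share of the data against an external population benchmark and compute the model’s error rate. Every figure here is invented for illustration.

    # Minimal sketch of an audit of data and model behavior by group:
    # compare each group's share of the data with its population share,
    # and compare per-group error rates. All figures are invented.
    from collections import Counter

    # Assumed external benchmark for each group's share of the relevant population.
    population_share = {"group_a": 0.50, "group_b": 0.50}

    # A small held-out audit set: (group, model_prediction, true_outcome).
    audit_set = [
        ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
        ("group_a", 1, 0), ("group_a", 0, 0),
        ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 0), ("group_b", 0, 0),
    ]

    counts = Counter(group for group, _, _ in audit_set)

    for group, expected_share in population_share.items():
        data_share = counts[group] / len(audit_set)
        rows = [(p, y) for g, p, y in audit_set if g == group]
        error_rate = sum(p != y for p, y in rows) / len(rows)
        print(f"{group}: data share {data_share:.2f} "
              f"(population share {expected_share:.2f}), error rate {error_rate:.2f}")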
