Bias in Artificial Intelligence

Real-world applications of Artificial Intelligence (AI) have skyrocketed in recent years. Today, nearly every tech titan (IBM, SAP, Salesforce, Microsoft, Amazon, and others) offers some sort of enterprise AI solution, making it easier than ever for industries of all kinds to embrace the technology. With all of the recent hype surrounding AI and Machine Learning (ML), enterprises may feel compelled, or even pressured, to adopt AI in order to stay competitive. But despite its seemingly endless power and potential, AI does not come without flaws.

The Problem

AI runs on algorithms that developers build into it, and since humans are inherently biased, those biases are inevitably built into the technology. It is difficult to identify our own biases and, therefore, extremely difficult to identify and prevent bias in AI. Further, since AI is built to evolve and learn from the data it collects, the problem compounds as time passes.

Do we really understand the numerous sources and negative impacts of bias in AI? In addition to the unintentional biases of development engineers, do we account for systemic bias in research data collection or analysis? Medical research offers a useful comparison: physicians depend on research findings for treatment decisions every day, so patient outcomes depend on good research data, and preventing bias is treated as essential. If bias is a known and carefully guarded concern in medical research, why is it suddenly being treated as a new concern in AI and machine learning?

As Cathy O’Neil summarized in her 2016 book Weapons of Math Destruction, “Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something only humans can provide. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead. Sometimes that will mean putting fairness ahead of profit.”

Yes, fairness should be put ahead of profit, but it seems that most big cloud vendors have rushed to release their AI solutions as quickly and competitively as possible. They are in an AI battle, competing for sales and early adopters, regardless of how deeply they tested for bias during R&D. So, what are these big cloud vendors doing to address the problem?

Vendors Address Fairness in AI

In the past few months, many cloud vendors have made announcements about how they will try to address the issue moving forward. In September, Google announced its What-If Tool for developers, which includes fairness assessments; IBM launched visual developer tools and open-sourced its ML fairness tools; and SAP announced a set of ethical guiding principles for AI development in addition to creating an external advisory panel on AI ethics. Microsoft has created a new position to help companies address a range of potential problems, such as bias and transparency. These are just a few examples; many other vendors are open-sourcing their tools or starting to address these issues in some way.
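
To give a sense of what these "fairness assessments" actually check, below is a minimal, hypothetical sketch of one common metric, the disparate impact ratio: the rate of favorable outcomes for one group divided by the rate for another. The data, group labels, and 0.8 warning threshold are illustrative assumptions, not taken from any specific vendor's tool.

```python
# Illustrative sketch only: a simplified "disparate impact" check of the kind
# that fairness toolkits surface. All data below is hypothetical.

def disparate_impact(outcomes, groups, privileged, unprivileged):
    """Ratio of favorable-outcome rates: unprivileged group vs. privileged group.
    A value near 1.0 suggests parity; values well below 1.0 flag potential bias."""
    def favorable_rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected) if selected else 0.0
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Hypothetical hiring-model outputs: 1 = recommended for interview, 0 = not
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(outcomes, groups, privileged="A", unprivileged="B")
print(f"Disparate impact ratio: {ratio:.2f}")  # below ~0.8 is a common warning sign
```

A ratio well below 1.0 would prompt a closer look at the training data and model before deployment, which is exactly the kind of review these vendor tools are meant to encourage.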

While these recent efforts are certainly a step in the right direction, we have to wonder: is this too little, too late? Should vendors have been more transparent about their technologies and the data feeding their AI systems during discussions with potential customers, or even prior to launching?

AI Customers Are Responsible Too

Though algorithmic bias in AI has long been a concern, people and businesses are trusting AI more readily than ever. Companies and even government bodies such as law enforcement are increasingly adopting and deploying the technology – often without fully considering the potential biases that may develop or already be present.

There are countless ways AI is being used today: enterprises use AI to generate analytics that drive business decisions, HR departments use it to help with hiring, and companies like Apple and Amazon use AI voice assistants like Siri and Alexa to interact with consumers. The severity of the potential consequences depends on the application, but there is no denying that blindly trusting AI could lead to mistakes and negative consequences for individuals or even society as a whole.

A related quote comes to mind from an article I recently read by mathematician Hannah Fry, who stated, “The inherent problems of algorithms are magnified when they are paired with humans and our ready acceptance of artificial authority. But maybe that’s precisely the point. Perhaps thinking of algorithms as some kind of authority is where we went wrong.”

Stay Critical of AI’s Accuracy and Potential Risks  

Overall, AI is capable of some amazing things, and it is encouraging to see AI vendors starting to address the issue of bias head-on. However, AI isn’t perfect, and bias will likely remain a problem that needs to be continuously monitored and mitigated. It’s not just the creators who need to be aware of these dangers; customers and users should be conscious of potential issues as well.

Enterprises that are considering AI solutions should seek transparency into how the AI was developed and what the vendor is doing to combat potential biases. Acknowledging these concerns and other potential risks can also help you negotiate stronger AI deals and lower the risk of negative outcomes.

Comment below, follow me, Ted Rogers, on Twitter @UpperEdgeTed, find my other UpperEdge blogs, and follow UpperEdge on Twitter and LinkedIn.
