We have been hearing a lot about AI (artificial intelligence) these days. It's no wonder that Google's latest smartphone announcement, the Pixel 4, with its AI-enhanced recording and transcription app, received so much attention.

As we march toward a world enhanced by AI and automation, we need to make sure we’re not marching forward blindly. We need to think about what problems these technologies may create as they revolutionize industries.

And while we won't be able to anticipate all problems, or even come up with answers to all of the problems we can anticipate, we can start talking about AI from every angle to help prevent us from being blindsided. One discussion we need to keep having is about AI and ethics, and more specifically, bias in AI.

There is no question that artificial intelligence is fundamentally changing our society. While most of you reading this blog already know how great AI is, and how great it can be, perhaps we also need to exercise a little caution, check our enthusiasm, and talk about the ethical concerns it raises.

If we begin to look at AI through a broader lens, it opens the door to many questions about the use of AI and ethics. Some of the things we need to be thinking about when designing AI systems are transparency, fairness, accountability, safety, and bias.

We know that when looking at AI, there are many tradeoffs that have to be taken into consideration during development, particularly in the way the algorithms behind these automated systems operate and make decisions.

These tradeoffs explain how bias ends up in AI. Humans are the ones building the algorithms and training the AI. Humans must teach the AI how to make decisions, and they’re going to do so based on their own personal worldviews.
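To see how this plays out, consider a toy example. Here's a minimal sketch (the data, features, and outcome are all invented for illustration) of a model trained on historical hiring decisions. If the history was biased, the model faithfully learns to reproduce that bias:

```python
# Minimal sketch: a model trained on biased historical decisions
# reproduces that bias. All data here is invented for illustration.
from sklearn.linear_model import LogisticRegression

# Features: [years_experience, group], where "group" encodes a
# demographic attribute (0 or 1). Labels: historical hire/no-hire
# decisions that favored group 0 regardless of experience.
X = [[5, 0], [3, 0], [1, 0], [5, 1], [3, 1], [1, 1]]
y = [1, 1, 1, 0, 0, 0]  # biased history: group 0 always hired, group 1 never

model = LogisticRegression().fit(X, y)

# Two identical candidates who differ only in group membership:
print(model.predict([[4, 0], [4, 1]]))  # likely [1 0]: the bias is learned
```

Nothing in the training process flags this as a problem. The model is simply doing what it was taught.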

If we accept this thinking, then bias is just inevitable and inescapable.

But on further examination, MIT makes the case that bias enters the AI equation before the algorithms even exist, as far back as the point when the data that will inform those algorithms is created.

Consider how people go about creating deep-learning models in the first place. Say their goal is to make a good business decision. The AI is going to end up prioritizing whatever makes the most business sense over other considerations, such as fairness or avoiding discrimination.
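Put another way, the objective function is where those priorities get encoded. Here's a hedged sketch of the idea: unless a designer deliberately adds a fairness term to the loss, the optimizer only ever "sees" the business metric. The demographic-parity penalty below is just one illustrative choice among many, not a standard, and all the numbers are made up.

```python
import numpy as np

def profit_loss(preds, profits):
    """Pure business objective: maximize expected profit."""
    return -np.mean(preds * profits)

def fairness_penalty(preds, groups):
    """Gap in approval rates between two groups (a demographic-parity term)."""
    return abs(preds[groups == 0].mean() - preds[groups == 1].mean())

def combined_loss(preds, profits, groups, lam=0.0):
    # With lam=0 (the default), the optimizer only ever "sees" profit;
    # fairness enters the decision only if someone deliberately weights it.
    return profit_loss(preds, profits) + lam * fairness_penalty(preds, groups)

preds   = np.array([1, 1, 0, 0, 1, 0])  # hypothetical approval decisions
profits = np.array([3, 2, 1, 4, 2, 1])  # expected profit per approval
groups  = np.array([0, 0, 0, 1, 1, 1])
print(combined_loss(preds, profits, groups))           # business-only objective
print(combined_loss(preds, profits, groups, lam=1.0))  # fairness now has weight
```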

Another sticky point to consider is this: Do we know exactly what “fairness” means in the context of machine-learning outcomes?

A research paper argued to the contrary, making the case that we can't really pin down a single definition. And yet, even as I write this, do we have an obligation to try?
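To see why "fairness" is so slippery, consider that researchers have proposed multiple formal definitions that can conflict with one another. Here's a minimal sketch, using made-up loan decisions, of two common ones: demographic parity (equal approval rates across groups) and equal opportunity (equal approval rates among qualified applicants). The same predictions can satisfy one and violate the other:

```python
# Toy illustration: the same predictions can satisfy one fairness
# definition and violate another. All numbers are made up.

def demographic_parity_gap(preds, groups):
    """Difference in approval rates between groups A and B."""
    rate = lambda g: sum(p for p, grp in zip(preds, groups) if grp == g) / groups.count(g)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(preds, labels, groups):
    """Difference in approval rates among *qualified* applicants only."""
    def tpr(g):
        qualified = [p for p, l, grp in zip(preds, labels, groups) if grp == g and l == 1]
        return sum(qualified) / len(qualified)
    return abs(tpr("A") - tpr("B"))

# Hypothetical loan decisions: 1 = approve, 0 = deny; label 1 = would repay.
preds  = [1, 1, 0, 0, 1, 1, 0, 0]
labels = [1, 1, 0, 0, 1, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_gap(preds, groups))         # 0.0  -- looks "fair"
print(equal_opportunity_gap(preds, labels, groups))  # 0.33 -- looks unfair
```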

We have to keep working at this. The stakes are too high. AI is already being used or will soon be used to make decisions regarding loans, insurance rates, college admissions, job placement and candidacy, and so on. Bias is not just a problem when AI is making big decisions, either.

How about when voice assistants can’t understand a person’s dialect? Is that fair?

When there's a racial imbalance in the teams creating AI solutions like voice assistants, it can mean those AI systems have a hard time understanding people who speak differently from the people who are predominantly training these algorithms.
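One practical response is to audit a system's accuracy across dialect groups before it ships. Here's a rough sketch of what such a check might look like; the transcribe function is a hypothetical stand-in for whatever speech model is under test, and the samples are assumed to be labeled by the speaker's dialect group.

```python
# Rough sketch of a per-dialect accuracy audit for a speech system.
# `transcribe` is a hypothetical stand-in for the model under test;
# each sample is assumed to be labeled with its speaker's dialect group.
from collections import defaultdict

def word_errors(reference, hypothesis):
    """Crude word-level mismatch count (a real audit would use edit distance)."""
    ref, hyp = reference.split(), hypothesis.split()
    return sum(r != h for r, h in zip(ref, hyp)) + abs(len(ref) - len(hyp))

def audit_by_group(samples, transcribe):
    """samples: iterable of (audio, reference_text, dialect_group) tuples."""
    errors, words = defaultdict(int), defaultdict(int)
    for audio, reference, group in samples:
        errors[group] += word_errors(reference, transcribe(audio))
        words[group] += len(reference.split())
    # Word error rate per group; a large gap points to training-data imbalance.
    return {g: errors[g] / words[g] for g in errors}
```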

AI gets a lot right, though. And we need to remember that even a biased AI system has the potential to be less biased than a human, who doesn’t necessarily try to be unfair, but makes decisions based on factors that even that person can’t always understand or explain.

The scary thing about bias in AI versus bias in humans is the potential for AI to scale—and the bias along with it.

As McKinsey & Co. puts it: “AI can help reduce bias, but it can also bake in and scale bias.”

The Algorithmic Justice League is a collective that aims to highlight algorithmic bias and increase awareness about it. It provides a space for people to voice their experiences with and concerns about bias in AI, and it helps develop practices for accountability during the design, development, and deployment of coded systems.

On its website, users can request a bias check to get help testing their tech with diverse users. They can report bias in someone else’s tech. And it offers a lot of resources that help raise awareness about bias in AI through media, art, and science outlets.

The future will be riddled with biased AI. The only question left to ask is how companies will address the fairness issues and biases we see in order to make these systems work. Or will they even care enough to address the problems that are going to explode in the years ahead?

Want to tweet about this article? Use hashtags #M2M #IoT #AI #artificialintelligence #machinelearning #bigdata #PeggySmedley #digitaltransformation #cybersecurity #blockchain #manufacturing #5G #cloud