I can’t count how many times I have written about AI (artificial intelligence). But for this column I will address it again, at least in part, because you will be hearing about it for many years to come. There are many reasons why this is true. AI is changing the nature of the relationship between humans and machines, and that’s a heavy statement with a lot of consequences. It’s for this very reason I believe we need to focus on the question of ethics in autonomous and intelligent systems.

The investments we’re seeing in AI suggest the market is going to surge in the next few years. Analytics provider SAS just invested $1 billion in AI, and it is just one of many companies that see AI as the future of their business.

The IEEE just released the second version of its “Ethically Aligned Design” whitepaper—a vision for prioritizing human wellbeing with autonomous and intelligent systems. The IEEE aims to encourage a public discussion about how to implement intelligent and autonomous systems and technologies in ethically and socially responsible ways.

It also wants to inspire the creation of standards and facilitate national and global policies based on these guiding principles. Why do we need something like this? Simply put, we are still in uncharted territory. Take autonomous vehicles as an example. It doesn’t take much brainpower to start weaving a complicated web of ethical conundrums involving self-driving vehicles.

When inanimate objects make decisions that could impact the safety of human beings, the pressure is on the humans developing that AI to think about the outcomes of the machines’ decisions.

If an autonomous vehicle is in a lose-lose situation—for whatever reason, a collision is imminent and the AV must decide between two possible outcomes in a split second—what decision does it make?

Does it prioritize passenger safety? Does it prioritize pedestrian safety? It’s sticky; it’s uncomfortable; but it’s so important to talk about. We can’t leave questions like these unanswered and expect it all to just work itself out. It won’t work itself out.
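To make the point concrete, here is a deliberately oversimplified sketch in Python. Every name and number in it is hypothetical—it is not drawn from any real AV system—but it shows why these questions can’t be left unanswered: any code that picks a maneuver has to encode some weighting of passenger risk against pedestrian risk, so shipping the software means answering the ethical question, whether anyone debated it or not.

```python
# A hypothetical, deliberately oversimplified sketch -- not any real AV stack.
# It exists only to show that deployed code necessarily encodes an ethical
# policy: the weights below ARE a value judgment, debated or not.

from dataclasses import dataclass

@dataclass
class Outcome:
    maneuver: str            # e.g., "brake_straight", "swerve_left"
    passenger_risk: float    # estimated probability of passenger harm, 0..1
    pedestrian_risk: float   # estimated probability of pedestrian harm, 0..1

def choose_maneuver(options: list[Outcome]) -> Outcome:
    # These weights are an assumption for illustration: pedestrian harm and
    # passenger harm are weighed equally. Who decided that, and how?
    PEDESTRIAN_WEIGHT = 1.0
    PASSENGER_WEIGHT = 1.0

    def cost(o: Outcome) -> float:
        return (PEDESTRIAN_WEIGHT * o.pedestrian_risk
                + PASSENGER_WEIGHT * o.passenger_risk)

    # Pick the maneuver with the lowest weighted expected harm.
    return min(options, key=cost)

if __name__ == "__main__":
    options = [
        Outcome("brake_straight", passenger_risk=0.1, pedestrian_risk=0.6),
        Outcome("swerve_left",    passenger_risk=0.5, pedestrian_risk=0.1),
    ]
    print(choose_maneuver(options).maneuver)  # prints "swerve_left"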

The IEEE says the ethical design, development, and implementation of autonomous and intelligent technologies should be guided by a set of principles, which dozens of international thought leaders collaborated to develop.

They are:

  • Human rights. Autonomous and intelligent technologies should not infringe on internationally recognized human rights.
  • Wellbeing. These technologies should prioritize metrics of wellbeing in their design and use.
  • Accountability. Designers and operators of autonomous and intelligent technologies should be held responsible and accountable.
  • Transparency. Autonomous and intelligent technologies should operate in a transparent manner.
  • Awareness of misuse. In the design, development, and deployment of these technologies, we must minimize the risks of their misuse.

I like what the IEEE is setting forth for several reasons. For one thing, it’s looking for ways to provide legal frameworks for accountability. It’s also recognizing people’s right to define access to, and give informed consent about, the use of their personal data as a “fundamental need.”

This is helpful language as we move forward into a digital age, especially as data starts to function more and more like a form of currency.

I also appreciate that the IEEE is advocating for education and awareness of the benefits and risks associated with intelligent and autonomous systems.

One consequence of the increasing use of AI and autonomous and intelligent systems in our society is that we are being confronted with new questions about the ethics of these technologies.

Google’s earlier attempt to create an AI ethics panel was nixed about a week after it was announced. But it is important to address the question of big tech players and ethics.

Tech giants like Google, Amazon, Facebook, and Microsoft have all made a show of supporting ethics-in-AI conversations.

Amazon and the NSF (National Science Foundation) have partnered to support research focused on “Fairness in AI,” and that sounds great.

Earlier this year, Facebook announced it had partnered with the Technical University of Munich to create an independent AI ethics research center called the Institute for Ethics in Artificial Intelligence.

And Microsoft has an internal AI and ethics board and offers “Guidelines for Responsible Bots” on its website. Microsoft has also published its own “AI Principles,” which include fairness, inclusiveness, reliability and safety, transparency, privacy and security, and accountability.

AI is evolving very quickly. How it will impact humans is yet to be known. But it’s certain that our world will be forever changed by AI.

Want to tweet about this article? Use hashtags #M2M #IoT #ethics #autonomous #bigdata #AI #artificialintelligence #machinelearning #PeggySmedley #digitaltransformation #cybersecurity #blockchain #manufacturing #automotive #AV #5G #IEEE