October 2019:

Ethics and Artificial Intelligence

Researchers are asking and seeking answers to AI’s most complicated questions.

AI (artificial intelligence) is a collection of technologies that enable a system to sense, comprehend, act, and learn. These technologies are empowering professionals to better make sense of the vast amounts of digital data collected by modern information systems. Today, AI is taking over some routine tasks, and some jobs have changed as a result. Going forward, AI and machine learning will increasingly be used to augment existing professional capabilities and to draw more actionable structure from raw data. As this continues, some jobs may disappear.

A lot is being said about job displacement due to automation and AI, but many experts believe AI will create more opportunities for new tasks and new jobs than it will eliminate: a net positive for society.

As it takes over certain tasks, AI will free humans up to think creatively and critically. While job displacement due to AI and automation, along with all of its social implications, is something to be concerned about, it may not be the most pressing issue surrounding artificial intelligence.


The most pressing AI issue may actually be ethics.

Major ethical concerns raised by AI systems have to do with transparency, fairness, accountability, bias, and safety. AI is currently being used for tasks like helping human judges, probation officers, and parole officers determine a criminal defendant’s likelihood of repeat offenses and helping people find suitable jobs based on algorithms that show relevant job postings to candidates on employment platforms. In these scenarios and many others, any biases within the AI algorithms could have a real impact on people’s lives.
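The stakes of such bias can be made concrete. As a minimal sketch (the groups, decisions, and numbers below are invented for illustration, not drawn from any real system), one common fairness check compares how often each group receives the favorable outcome:

```python
# Sketch: measuring demographic parity in algorithmic decisions.
# All data here is invented for illustration.

def selection_rate(decisions):
    """Fraction of cases receiving the favorable outcome (1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Difference between the highest and lowest group selection rates.

    A gap near 0 means the algorithm grants the favorable outcome at
    similar rates across groups; a large gap flags possible bias.
    """
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical screening decisions (1 = shown the job posting).
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 75% selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 25% selected
}

print(f"Demographic parity gap: {demographic_parity_gap(decisions):.2f}")
# → Demographic parity gap: 0.50
```

A gap this large would prompt a closer look at how the underlying data and model were built; this is only one of many competing definitions of fairness.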

In industries like manufacturing and healthcare, AI technologies are helping organizations innovate and meet society’s evolving needs in a connected world. However, questions remain about the darker sides of AI, including the ethical conundrums raised in certain use cases. The next generation of researchers are already thinking about these conundrums; they’re asking and seeking answers to AI’s most complicated questions.

Transformative AI in Industry

David Regan, managing director of revenue at Accenture, says AI is being deployed today to help humans with routine tasks and, in the future, will support increasingly complex tasks. “AI can be seen as a tool to increase service, decrease costs, expedite delivery, and improve accuracy,” Regan says. “In a tight economy, those companies that can best differentiate on service, manage costs, and that are faster to market and more reliable will thrive. Taking advantage of the power (of) AI to drive these initiatives will give organizations real advantage.”

AI is playing an important role in companies’ digital transformations, offering the benefit of augmented intelligence and, eventually, the ability to complete intricate or multifaceted tasks. Gartner says the number of enterprises implementing AI grew 270% in the past four years, with the share of organizations deploying AI rising from 25% in 2018 to 37% in 2019.

Source: Gartner

AI technologies are helping businesses improve customer service, protect customers from fraud, and address skills shortages. Gartner’s 2019 CIO Survey suggests 52% of telco organizations deploy chatbots and 38% of healthcare providers rely on computer-assisted diagnostics. More than half (54%) of respondents said a skills shortage is their organization’s biggest challenge.

Gabriel Fairman, CEO of Bureau Works, says it’s easy to envision a world in which AI takes over mundane, repetitive tasks that are based on data. “In our case, we use AI to pair the right translator to the right job, for instance, unburdening the project management from that stress. It’s an example of a data-driven repetitive task that can be taken over by an algorithm more easily and effectively.”

Automation will make room for people to think creatively, perform abstract problem solving and critical thinking, and use their intuition to not only flag but also help guide their organizations through challenging scenarios.

“AI will challenge the workforce to capitalize on the human differential. I see that as a great thing, but it will be deeply transformative,” Fairman says. “By freeing up our time to think critically about things, AI can enable us to be more creative when facing challenges such as a tight economy.”

In industries like manufacturing, AI can help drive business efficiencies, maximizing margins and going through large sets of data in order to extract relevant decision points. Industry experts envision AI being applied to manufacturing to enable predictive maintenance, improve quality control, decrease design time, and reduce waste. Further, because AI technologies can automate certain tasks, AI will redefine humans’ roles in manufacturing, leading them to perform new and different tasks as part of their roles. For most people, change and the option to upskill will be a welcome opportunity. A minority of workers will struggle to adapt and will need additional support and guidance.
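As a hedged illustration of the predictive-maintenance idea (the sensor readings and threshold below are invented, not taken from any real deployment), one simple approach flags a machine whose latest reading drifts far from its own history:

```python
# Sketch: flagging machines for maintenance from sensor drift.
# Readings and the z-score threshold are invented for illustration.
import statistics

def needs_maintenance(readings, z_threshold=2.0):
    """Flag the latest reading if it sits more than z_threshold
    standard deviations above the historical mean."""
    history, latest = readings[:-1], readings[-1]
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return (latest - mean) / stdev > z_threshold

healthy = [1.0, 1.1, 0.9, 1.0, 1.05, 1.02]  # stable vibration levels
failing = [1.0, 1.1, 0.9, 1.0, 1.05, 2.5]   # sudden spike

print(needs_maintenance(healthy))  # → False
print(needs_maintenance(failing))  # → True
```

Real systems use far richer models, but the design choice is the same: learn each machine’s normal behavior from data, then act before the anomaly becomes a failure.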

What AI Systems Should Be Able to Do

  • Explain their rationale and their reasoning whenever needed
  • Characterize their strengths and weaknesses
  • Compare with other AI systems
  • Convey an understanding of how they will behave in the future
  • Make the enterprise scalable through intelligent decisions, and make decisions smarter by augmenting humans with machines

Source: Accenture

“Accenture’s experience of deploying AI in call centers, in back-office casework, and in mobile field situations has shown that the workforces are pleased to be released from routine tasks to more value-adding tasks,” says Accenture’s Regan. “Whether it is the use of biometrics to authenticate callers, voicebots or chatbots to handle routine queries and routine processing, or natural-language processing to scan documents for keywords and phrases, the ability of the machine to eliminate simple repetitive tasks has been welcomed. The workforce enjoys undertaking the more complex tasks.”

In healthcare, AI technologies can help doctors in their clinical decisionmaking by surfacing relevant information about a patient, augmenting clinical searches in the medical literature, detecting underlying physiological patterns from vital signs, enabling the early detection of clinical conditions, and facilitating diagnoses. AI may also accelerate the development of new treatments and drug discoveries, as well as enabling personalized cancer treatments and immunotherapy.

Elaine Nsoesie is an assistant professor and computational epidemiologist who leads a team of computational social scientists, public health experts, and computer scientists at Boston University’s Dept. of Global Health. In public health, Nsoesie says AI technologies have been useful in processing large datasets to find patterns relevant for understanding health and disease in communities. Examples include tools for monitoring reports of illness on digital platforms and forecasting temporal and spatial disease trends.

As a result, the AI-in-healthcare market is expected to grow substantially, with a CAGR of over 28% during 2019-2023.

In the future, AI will continue to impact industries like healthcare on a global scale. “In low-resource regions where access to clinicians and hospitals is limited, AI has the potential to improve disease diagnosis and access to health information,” Nsoesie says. For AI to reach its transformative potential, though, each industry needs to consider how bias could enter into the equation. “It is important for AI modelers to be transparent about the datasets they are using and how decisions are made regarding what variables to include/exclude,” she adds. “Equally important is the need to regularly assess whether models are producing their intended outcomes and to make changes to achieve fairness.”


Addressing Unwanted Bias in AI

As more industries deploy AI technologies in more situations, the industry must ask the right ethics-related questions. Frank Buytendijk, distinguished vice president and analyst at Gartner, can think of several off the top of his head. “How can we make sure that AI is human-centric and socially beneficial? Should humans always be in charge … but that doesn’t scale,” Buytendijk says. “Should AI augment human intelligence, or should it be able to develop its own style of thinking? And what is socially most beneficial, for instance, profit vs. jobs?”

Buytendijk says not all bias is bad. “The values you build into the learning/programming are bias too. You like your AI to behave in a certain way,” he explains. “Moreover, the flip side of bias is diversity.

The decisionmaking of a collective of algorithms, each biased in a slightly different way because of different learning actually makes the decisionmaking process more balanced. Extremes get filtered out, (and) decisions get validated by many different algorithms. In this case, organized bias or diversity actually helps the results improve.”
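Buytendijk’s “collective of algorithms” idea can be sketched in code. Assuming, purely for illustration, a set of scorers that each skew in a different direction, aggregating their estimates with a median filters out the extremes:

```python
# Sketch: a collective of differently biased estimators.
# The scorers and their fixed offsets are invented for illustration.
import statistics

def make_biased_scorer(offset):
    """Return a scorer whose estimates skew by a fixed offset."""
    return lambda true_value: true_value + offset

# Each algorithm learned slightly differently, so each skews differently.
scorers = [make_biased_scorer(o) for o in (-0.2, -0.1, 0.0, 0.1, 0.2)]

def collective_decision(true_value, scorers):
    """Take the median of all estimates: extremes get filtered out."""
    estimates = [score(true_value) for score in scorers]
    return statistics.median(estimates)

print(collective_decision(5.0, scorers))  # → 5.0, individual skews cancel
```

The design choice here mirrors the quote: diversity among the individual models, not the perfection of any one of them, is what balances the collective decision.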

AI Ethics-Related Questions

Gartner analyst Frank Buytendijk lists some important questions the industry should be asking about AI and ethics.

  • How can we make sure AI is human-centric and socially beneficial?
  • Should AI augment human intelligence, or should it be able to develop its own style of thinking?
  • How can we make sure AI systems are fair in their decisions?
  • How can we eliminate undesirable bias?
  • Is it OK if AI algorithms manipulate behaviors?
  • How can we make sure AI systems are explainable and transparent?
  • Is it OK if people don’t know they are interacting with an AI instead of a human?
  • Should it always be possible to audit a decision?
  • How can we make sure AI is secure and safe?
  • How do we balance AI being smart and precise with respecting privacy?
  • How do we deal with unintended consequences of AI?
  • How do we make sure AI is proportionate to the problem it tries to solve?
  • How can providers of AI systems be accountable, even if the machine learning departs from its original programming?

AI that discriminates or perpetuates real-life gender or racial bias, however, must be addressed. There is currently a lot of academic work on studying how bias can arise in AI systems and developing methodologies for mitigating bias or adjusting standard AI algorithms to be more equitable. Angela Zhou, a PhD student at Cornell Tech in operations research and information engineering, is one of many bright minds in training looking into how ethics and AI intersect.

“There are many ethical concerns raised by AI,” Zhou says. “When AI is being used to direct decisionmaking, in particular in settings where there previously was human oversight or human decisions—whether it be caseworkers in social services or doctors in the healthcare setting—many might find it objectionable to be subject to decisions arising from algorithms rather than humans. Yet this requires deeply understanding and characterizing potential ‘tradeoffs’. While humans may have access to expert training in certain domains and have developed expertise in certain areas, AI tools offer consistency and the ability to make sense of statistical patterns in incomprehensibly complex or voluminous data.”

To address bias, Zhou suggests taking an interdisciplinary approach. “Overall, addressing bias is a deeply interdisciplinary question of using domain knowledge to reason about how bias might surface in AI systems, what interventions are appropriate for the particular problem context, and finally if there are appropriate technical interventions, appropriately using those to improve bias—however it ends up being defined,” she says.

Moses Namara, a human-centered computing researcher and PhD student at Clemson University, is another person investigating AI ethics as part of his career-training path.

“During the course of making (AI) systems, countless decisions or tradeoffs are made,” Namara says. “This affects the way the algorithms behind these automated systems operate and make decisions.

At a high level, AI algorithms are meant to identify patterns in observed data, build models that explain that data and use those models to predict any other similar data that it encounters in the future. Thus, if the observed data is not large and as diverse, then we end up with … bias and unfairness.”
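Namara’s point about unrepresentative data can be shown with a deliberately tiny toy model (everything here is invented): a model fit to a skewed sample reproduces the skew rather than the reality it never saw:

```python
# Sketch: a toy model trained on skewed data simply learns the skew.
# The dataset and labels are invented for illustration.
from collections import Counter

def train_majority_model(labels):
    """A deliberately simple 'model': always predict the label that
    was most common in the training data."""
    return Counter(labels).most_common(1)[0][0]

# Observed data drawn overwhelmingly from one group: nine outcomes
# from group A, a single (different) outcome from group B.
training_data = [("group_a", "approve")] * 9 + [("group_b", "deny")]
model = train_majority_model([label for _, label in training_data])

# The pattern from the underrepresented group is drowned out entirely.
print(model)  # → 'approve'
```

Real models are far more sophisticated, but the failure mode scales with them: patterns that are rare in the training data carry little weight in the result.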

However, Namara points out that it is not the technology alone that raises ethical concerns; it’s the human use cases in which AI systems are deployed. “AI systems are meant to perform a variety of tasks. Therefore, we have to carefully think about the possible use and misuse cases of AI systems in (a) bid to make them more grounded in our responsibilities to fellow humans,” he explains. “For example, facial recognition can be used to curb child trafficking (and) check in passengers but can also be misused as (a) surveillance tool.”

How can we address bias in AI, then? These researchers and many other future leaders in academia, tech, and government are already on it. Namara suggests:

  • Incorporate privacy design in AI systems to be accountable to all people.
  • Increase data openness, so AI developers have access to large datasets for training purposes.
  • Hire ethicists to work in tandem with software developers and corporate decisionmakers.
  • Have clear processes for deliberation and correction.
  • Have clear legal standards to prevent abuse.


Bureau Works’ Fairman says one point to remember about AI technologies is that we, as a society, have no idea where this journey will end. Nor is it clear how to properly govern the journey. “We are learning on the fly and prone to make magnanimous mistakes,” he says. The best scenario, then, is to work together to try to prevent as many mistakes as possible through creative, forward thinking.

Accenture’s Regan says AI is a powerful tool, and it should be treated as such. Sometimes bias in AI reflects real-world patterns and can be left in place, but in many circumstances the bias has been introduced by the way training data was accumulated, and it needs to be removed. “AI has potential to be used for good or nefarious purposes,” Regan concludes. “In having such potential, AI is no different to any other new powerful technology, and we must learn to recognize the limits we should apply to this technology.”

Want to tweet about this article? Use hashtags #M2M #IoT #AI #artificialintelligence #data #manufacturing #healthcare #bias #ethics #machinelearning #automation #privacy #digitaltransformation #smartcities #5G

 
