'Explainability' and ethics for AI: Researchers ramp it up at Silicon Valley's foremost artificial intelligence symposium

Fujitsu Laboratories Advanced Technology Symposium: "Make AI Trustworthy! Explainable and Ethical AI for Everyone"

AI has arrived: a proven technology that excites the world and generates heated debate between staunch advocates and lukewarm adversaries. From its starry-eyed birth in the 1960s to the down-to-earth tasks it performs today, AI has become the ultimate disruptor. Now, after the feel-good era of discovery, researchers find themselves grappling with two deep, and not unexpected, questions that challenge further deployment of AI.

Where, and how, is AI leading us?

AI is pushing its own boundaries. Today's AI reflects biologically inspired programming that allows neural networks to learn. This approach, known as "deep learning," gives machines the ability to generate algorithms and discover solutions on their own. Currently, deep learning is applied only to narrowly defined tasks, but this "narrow AI" is expected to rapidly morph into "general-purpose AI": a non-human entity able to perform any human task.

Which brings up the question of ethics. Researchers at a renowned Oxford University laboratory are trying to define how AI should interact with humans. Their efforts are helping establish a set of ethics for general-purpose AI while providing a usage framework for narrow AI applications.

On Oct. 9, Fujitsu Laboratories held its annual Fujitsu Laboratories Advanced Technology Symposium 2018 in Santa Clara, California, to discuss the latest advancements in AI. This year's theme was "Make AI Trustworthy! Explainable and Ethical AI for Everyone," a topic that shows many are still working out AI's place alongside humankind.

Fujitsu Laboratories Advanced Technology Symposium 2018

Explainable AI put to the test in Silicon Valley

As misuses of AI grow, researchers are paying more attention to ethics. In one case of abuse, deep learning was employed to create a video showing former President Barack Obama speaking completely made-up lines. In another, AI was used to "prove" that African Americans have a higher recidivism rate than Caucasians. While the former was clearly intentional, the latter was unintentional: skewed big data produced an erroneous result.

In addition to AI's ability to create bias, the problem of "black-boxing" ― or the inability to see how AI actually achieves a specific outcome ― is creating problems for industries such as health care.

Both bias and black-boxing create strong headwinds against AI's further adoption in business.

Shigeru Sasaki
CEO
Fujitsu Laboratories Ltd.

Shigeru Sasaki, CEO of Fujitsu Laboratories Ltd., noted that Fujitsu Labs chose the theme for this year's symposium because "labs wanted to put Fujitsu's explainable AI ― a new approach to a more transparent AI ― to the test in Silicon Valley, the global center of IT development."

Explainable AI makes artificial thought processes more transparent. It enables humans to explain how AI reached a certain conclusion and provides a deeper understanding of its reasoning and judgments.

To do this, Fujitsu Labs introduced two technologies: Deep Tensor®, a unique method of deep learning that analyzes graph data, which is often used to represent relationships between things or people; and Knowledge Graph, a dataset of knowledge collected from different information sources.

Connecting inferences derived by Deep Tensor® to Knowledge Graph enables us to understand the reasons behind AI-generated findings and to make them explainable. Sasaki noted that "Fujitsu's AI satisfies the need for explainability" in mission-critical industries, such as healthcare and financial services.
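
To make this linkage concrete, here is a minimal sketch in Python of the pattern described above, under the assumption that the model can report its most influential input factors. The factor names, scores, and knowledge-graph triples are invented placeholders, not Fujitsu's actual Deep Tensor or Knowledge Graph interfaces.

    # Hypothetical output of a graph-based model: the input factors that
    # most influenced a prediction, with their influence scores.
    influential_factors = [
        ("mutation_in_gene_A", 0.91),
        ("interaction_with_drug_B", 0.77),
    ]

    # A toy knowledge graph: (subject, relation, object) triples collected
    # from different information sources, e.g. papers and databases.
    knowledge_graph = [
        ("mutation_in_gene_A", "associated_with", "elevated disease risk"),
        ("mutation_in_gene_A", "reported_in", "medical literature, study A"),
        ("interaction_with_drug_B", "reported_in", "clinical study B"),
    ]

    def explain(factors, graph):
        """For each influential factor, retrieve supporting facts from the graph."""
        for name, score in factors:
            print(f"{name} (influence {score:.2f}):")
            for subject, relation, obj in graph:
                if subject == name:
                    print(f"  {relation}: {obj}")

    explain(influential_factors, knowledge_graph)

The point of the pattern is that the model's raw output is never shown alone: each driving factor arrives paired with human-readable evidence, which is what makes the conclusion explainable.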

Seishi Okamoto
Head of Artificial Intelligence Laboratory
Fujitsu Laboratories, Ltd.

Interestingly, Seishi Okamoto, Head of Artificial Intelligence Laboratory, Fujitsu Laboratories, Ltd., pointed out that "explainability in AI means that ethics is integrated into its thinking process." In other words, explainable AI is where technology merges with ethics, a rare occurrence.

Ethical AI in academia

The ethics of AI is being studied worldwide. The University of Oxford, University of Cambridge, Stanford University and University of California, Berkeley, as well as the volunteer research and outreach organization, Future of Life Institute, are studying this area. Some research focuses on the long-term influence of AI over humans and society, and the development of human-friendly AI.

Dr. Mariarosaria Taddeo
Deputy Director of the Digital Ethics Lab
Oxford Internet Institute
University of Oxford

The Digital Ethics Lab at the Oxford Internet Institute tackles the ethical challenges posed by AI and digital innovation, with the goal of shaping the governance of new technologies. Dr. Mariarosaria Taddeo, Deputy Director of the Lab at the Oxford Internet Institute, University of Oxford, noted that the ethics of AI was already a focus of debate 15 years ago. Lately, as AI has become pervasive thanks to computational advances, doctors, lawyers, policy makers and businesses have shown great interest in the ethics of AI.

Requirements for AI ethics

Ethical analyses for the governance of AI are being developed throughout the world. Because ethical analyses depend on moral and cultural values, they change with the context in which they are developed, prompting the question of how global ethical principles for the governance of AI can be defined.
Dr. Taddeo cited a study conducted at the Digital Ethics Lab and authored by Josh Cowls and Luciano Floridi, which identified in the four principles of bioethics, i.e. beneficence, non-maleficence, autonomy, and justice, the fundamental principles that provide the groundwork for ethical analyses of AI.

"The argument surrounding AI ethics always requires balancing different factors," says Dr. Taddeo. For example, one may need to access personal data to further research on Alzheimer's. In this case, research requires access to vast amounts of medical-related big data, prompting the need to find a trade-off between privacy and the need to advance science. In other cases, balancing privacy and security might also be an issue to consider.

Focusing on AI's benefits while going forward

AI is a transformative technology. Like other transformative technologies, e.g. electric power or mechanical engines, AI is integrating into the fabric of our societies, reshaping social dynamics, disrupting old practices, and prompting profound transformations. This makes AI a new foundational technology in need of its own specific ethical framework.
AI-led transformations pose fundamental ethical questions concerning our environment, societies, and human flourishing. From industrial plants and roads to smart cities, AI prompts a re-design of our environment to accommodate the routines that make AI work. It is crucial to understand what values should inform this design, the benefits that will follow from it, and the risks implicit in transforming the world into a progressively AI-friendly environment.

"The study of AI ethics is a never-ending task," said Dr. Taddeo. It is a continuous process, learning where we are headed as technology advances.

Three principles for human-friendly AI

The Center for Human-Compatible Artificial Intelligence (CHAI) at the University of California, Berkeley, studies human-friendly AI. The center sponsors researchers from a variety of disciplines, including computer science, AI, robotics, politics, business, philosophy and sociology, with the goal of making AI a human-friendly system.

Mark Nitzberg, executive director of CHAI, noted three points to consider when evaluating human-friendly AI:

  1. The robot's only objective is to maximize the realization of human values.
  2. The robot is initially uncertain about what those values are.
  3. Human behavior provides information about human values.

Mark Nitzberg
Executive Director
Center for Human-Compatible Artificial Intelligence (CHAI)
University of California, Berkeley

Current AI development does not focus on human values. Rather, it targets efficiencies to accelerate processes or maximize profits. In theory it looks good, but over time it can cause problems. In some cases, AI could even control humans.

CHAI suggests that human values be integrated into AI applications. In one robotics study, the center showed an AI-enabled robot arm tasked with moving a cup. Learning from human behavior, the arm chose a longer, less efficient movement with a lower risk of breaking the cup. Normally, a machine chooses the shortest, most efficient route; in this case, the AI incorporated a human value, i.e., the importance of not breaking the cup, into the task.
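
The trade-off can be illustrated with a toy calculation; the trajectories, costs, and the inferred weight below are invented numbers, a sketch of the idea rather than CHAI's actual experiment code.

    # Candidate trajectories for moving the cup, with made-up costs and risks.
    trajectories = {
        "shortest_path": {"seconds": 2.0, "break_risk": 0.30},
        "careful_path":  {"seconds": 5.0, "break_risk": 0.02},
    }

    # Weight the robot places on not breaking the cup, inferred from watching
    # humans consistently take the slower, safer movement (illustrative value).
    value_of_intact_cup = 20.0

    def cost(t, value_weight=0.0):
        # Efficiency cost plus the expected cost of breaking the cup.
        return t["seconds"] + value_weight * t["break_risk"]

    # Efficiency alone: the robot picks the risky shortest path.
    print(min(trajectories, key=lambda k: cost(trajectories[k])))
    # With the human value folded in, the careful path wins (5.4 vs 8.0).
    print(min(trajectories, key=lambda k: cost(trajectories[k], value_of_intact_cup)))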

Sharing human values is key

There are two interesting points in CHAI's approach to AI ethics. First, it focuses on "sharing values" to create human-friendly AI. The organization believes that the direct sharing of human values is more important than value judgments.

Secondly, when AI is confronted with uncertain objectives, it needs to observe humans to integrate their values into its knowledge base. In this way, AI can be kept on a short leash, shedding its uncertainty only as it evolves and learns more about human preferences.
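
One simple way to picture this process is a Bayesian update over candidate value hypotheses, as in the sketch below; the hypotheses and likelihoods are invented for illustration.

    # The agent's initial uncertainty over what the human values.
    prior = {"prizes_speed": 0.5, "prizes_safety": 0.5}

    # Likelihood of the observed behavior ("the human took the careful path")
    # under each hypothesis; these numbers are illustrative.
    likelihood = {"prizes_speed": 0.1, "prizes_safety": 0.9}

    # Bayes' rule: posterior is proportional to prior times likelihood.
    evidence = sum(prior[h] * likelihood[h] for h in prior)
    posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

    print(posterior)  # {'prizes_speed': 0.1, 'prizes_safety': 0.9}

Each observation sharpens the agent's belief about human preferences, which is exactly what keeps it deferential while it is still uncertain.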

Nitzberg also noted that safety is another issue to consider in the age of general-purpose AI. If AI can recreate itself, it could conceivably control and domesticate humans. To prevent this, we need to make AI human-friendly, ensuring that its utility maximizes human values, not its own. This is fundamental to developing AI.

In order to make AI relevant to humankind, we need to integrate thoughts and insights from all disciplines ― technology, philosophy and sociology to name a few. As AI develops, new debates and experiments will unfold.

Author Information
Tetsushi Hayashi
Chief Researcher
Cleantech Institute, Nikkei BP Intelligence Group
Graduated in 1995 from the School of Engineering, Tohoku University, and joined Nikkei Business Publications, Inc. Assistant editor-in-chief and reporter in the fields of ICT technologies, product development and standardization for Nikkei magazines, including Nikkei Data Pro, Nikkei Communication and Nikkei Network. From 2002, editor-in-chief of Nikkei Byte. From 2005, editor-in-chief of Nikkei Network. From 2007, editor-in-chief of Nikkei Communication. After serving as publisher of ITpro, Nikkei Systems, Tech-On!, Nikkei Electronics, Nikkei Monozukuri and Nikkei Automotive, became the overseas business manager, a position he currently holds. Since August 2016, he has been authoring a column titled "The Future of Self-Driving" for The Nikkei Online Edition. Published Sekai Jido-Unten Kaihatsu Project Soran (Comprehensive List of Global Self-Driving Projects) in December 2016 and Sekai Jido-Unten/Connected Car Kaihatsu Soran (Comprehensive List of Global Self-Driving/Connected Car Developments) in December 2017. Has also served as a judge for the CEATEC award committee since 2011.

(In Collaboration with Noriko Takiguchi)