Why Artificial Intelligence Needs Some Emotional Intelligence

Robots are becoming chefs, and software is our new chauffeur, but their capacity for empathy leaves something to be desired.

One of the theoretical advantages of software, artificial intelligence, algorithms, and robots is that they don’t suffer many human foibles. They don’t get sick or tired. They are polite — or rude — to everyone in equal measure. They follow orders.

The reality, of course, is different. Technology is designed by humans in all their frailty. As a result, it is eminently capable of imperfect human behavior. Software and algorithms have a hard time distinguishing fact from fiction. They can fall under the influence of people with malign intentions. And they can take good ideas too far.

Some examples: When a BBC reporter asked Google Home whether the ex-president of the United States was supporting a coup, the answer was yes. People have likewise been dismayed to find that when they type queries about the Holocaust into Google, the top results are often sites that deny the event. A year ago, when Microsoft introduced a chatbot that would interact with humans, it was quickly taught to spout racist language. In 2011, bots on Amazon continually bid up the price of a book by a penny, leading to a US$23.6 million price for a text on fly genetics. The algorithms that power surge pricing at car-hailing apps can send the cost of a ride soaring in emergencies or snowstorms — precisely the times when it may seem in poor taste to engage in surge pricing.
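The fly-genetics episode shows how fast an unsupervised feedback loop can run away. Below is a minimal sketch of two hypothetical repricing bots, each setting its price as a fixed multiple of the other's; the multipliers are illustrative assumptions roughly in the spirit of the reported incident, not the sellers' actual rules.

```python
# Minimal sketch of a runaway pricing feedback loop between two
# hypothetical repricing bots. The multipliers below are illustrative
# assumptions, not the actual sellers' rules.

def simulate_price_war(start_price: float, rounds: int) -> None:
    seller_a = start_price          # bot A starts at the list price
    seller_b = start_price * 1.25   # bot B starts slightly higher
    print(f"start  : A = ${seller_a:,.2f}   B = ${seller_b:,.2f}")
    for day in range(1, rounds + 1):
        seller_a = 0.9983 * seller_b   # bot A reprices to just under B
        seller_b = 1.2706 * seller_a   # bot B reprices to well above A
        if day % 10 == 0:
            print(f"day {day:3d}: A = ${seller_a:,.2f}   B = ${seller_b:,.2f}")

# Each round the combined multiplier is 0.9983 * 1.2706, about 1.268, so
# prices grow exponentially; with no human sanity check, a $35 book
# passes US$23 million within two months of daily repricing.
simulate_price_war(35.00, 60)
```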

We may continue to anthropomorphize machines: Siri is our reference librarian, robots are becoming chefs, drones are morphing into delivery people, software is our new chauffeur. But although these systems have superhuman calculating, data-crunching, and memory capabilities, their capacity for self-awareness, judgment, empathy, and understanding leaves something to be desired.

One of the problems is that, in many instances, the engineers who designed the superefficient code didn't fully think through the impact on humans, the full possibilities of what humans could do with it, or the capacity of their products to inflict harm or offense. Code is too often crafted in a kind of beautiful intellectual isolation, without an awareness or appreciation of the broader and messier context.

The markets for information are inherently inefficient. Bad information sometimes crowds out good information. People with malign intentions are willing to go to great lengths to cause damage. Every technology brings unintended consequences. Just because technology can do something doesn’t mean it should. Emotion can be as powerful a motivating factor as reason.

These ideas — and their implications for society, human interaction, and the economy — are bruited about in history, philosophy, and sociology classrooms. But I don’t have the sense that they are discussed with the same frequency or nuance in computer science courses, or tech incubators, or venture capital boardrooms.

It’s common to lament that we have too many liberal arts majors who haven’t taken STEM classes. And that’s probably true. But it’s possible the converse is also true: that we have too many STEM majors who haven’t taken a sufficient number of liberal arts classes. If they did, they might have a greater appreciation for the power, pitfalls, and potential problems of the brilliant tools they’re devising — and for their impact on people. They might grapple more seriously — and more urgently — with the questions surrounding fake news, or understand why privacy is a social good, or why there are solid reasons not to let the free market run amok.

With robots and machines becoming more integrated into the human experience, it is all the more urgent for engineers to become familiar with the works of John Donne, John Locke, and Jean-Paul Sartre. If we are going to empower machines, algorithms, and software to do more of the work that humans used to perform, we have to imbue them with some of the empathy and limitations that people have.

Articles published in strategy+business do not necessarily represent the views of the member firms of the PwC network. Reviews and mentions of publications, products, or services do not constitute endorsement or recommendation for purchase.

strategy+business is published by certain member firms of the PwC network.

© PwC. All rights reserved. PwC refers to the PwC network and/or one or more of its member firms, each of which is a separate legal entity. Please see www.pwc.com/structure for further details. Mentions of Strategy& refer to the global team of practical strategists that is integrated within the PwC network of firms. For more about Strategy&, see www.strategyand.pwc.com. No reproduction is permitted in whole or part without written permission of PwC. “strategy+business” is a trademark of PwC.
