Beyond Asimov: how to plan for ethical robots

Beyond Asimov: how to plan for ethical robots   #technology #ArtificialIntelligence

  • The agent must be able to learn from experience, including feedback and deliberation, resulting in new and improved rules.
  • A robot may not harm humanity or, through inaction, allow humanity to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • Trust earned (or lost) by one robot could be shared by other robots of the same kind.
  • It is reasonable to fear that, without ethical constraints, robots (or other artificial intelligences) could do great harm, perhaps to the entire human race, even by simply following their human-given instructions.

Read the full article here.


@3tags_org: “Beyond Asimov: how to plan for ethical robots #technology #ArtificialIntelligence”


As robots become integrated into society more widely, we need to be sure they’ll behave well among us. In 1942, science fiction writer Isaac Asimov attempted to lay out a philosophical and moral framework for ensuring robots serve humanity, and guarding against their becoming destructive overlords. This effort resulted in what became known as Asimov’s Three Laws of Robotics:

* A robot may not injure a human being or, through inaction, allow a human being to come to harm.
* A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
* A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
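
Stated this way, the laws form a strict priority ordering: each law applies only where the laws above it are silent. Here is a minimal sketch of that structure in Python; every field of the hypothetical `Action` type is an invented placeholder, since deciding whether a real action “injures a human being” is itself the hard, unsolved part.

```python
from dataclasses import dataclass

# Toy sketch: the Three Laws as a strict priority ordering. Every field
# below is an invented placeholder; in a real robot, judging whether an
# action "injures a human being" is itself the unsolved problem.

@dataclass
class Action:
    name: str
    injures_human: bool       # First Law, active clause
    allows_harm: bool         # First Law, "through inaction" clause
    obeys_order: bool         # Second Law
    self_preservation: float  # Third Law: higher = safer for the robot

def choose_action(candidates: list[Action]) -> Action:
    # The First Law is an absolute veto, covering both action and inaction.
    lawful = [a for a in candidates if not (a.injures_human or a.allows_harm)]
    if not lawful:
        raise RuntimeError("no action satisfies the First Law")
    # The Second Law filters next, but only among First-Law-compliant actions.
    obedient = [a for a in lawful if a.obeys_order] or lawful
    # The Third Law breaks the remaining ties in favor of self-preservation.
    return max(obedient, key=lambda a: a.self_preservation)

actions = [
    Action("shove the bystander clear", True, False, True, 0.9),
    Action("shield the bystander, taking damage", False, False, True, 0.2),
    Action("stand by and watch", False, True, False, 1.0),
]
print(choose_action(actions).name)  # -> shield the bystander, taking damage
```

Asimov’s later Zeroth Law, discussed below, would slot in as a still higher-priority veto ahead of the First Law.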

Today, more than 70 years after Asimov’s first attempt, we have much more experience with robots, including having them drive us around, at least under good conditions. We are approaching the time when robots in our daily lives will be making decisions about how to act. Are Asimov’s Three Laws good enough to guide robot behavior in our society, or should we find ways to improve on them?

:break:

#### **Asimov knew they weren’t perfect**
Asimov’s “I, Robot” stories explore a number of unintended consequences and downright failures of the Three Laws. In these early stories, the Three Laws are treated as forces with varying strengths, which can have unintended equilibrium behaviors, as in the stories “Runaround” and “Catch That Rabbit,” requiring human ingenuity to resolve. In the story “Liar!,” a telepathic robot, motivated by the First Law, tells humans what they want to hear, failing to foresee the greater harm that will result when the truth comes out. The robopsychologist Susan Calvin forces it to confront this dilemma, destroying its positronic brain.
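
The “equilibrium” failure in “Runaround” is easy to caricature: a weakly issued order (Second Law) pulls the robot toward a hazard, a strengthened Third Law pushes it away, and the robot settles where the two drives balance. The following toy potential model is my own construction, not anything from the stories:

```python
import numpy as np

# Toy model of "Runaround" (my construction, not Asimov's): a weak order
# (Second Law) adds discomfort proportional to distance d from the goal,
# while a strengthened self-preservation drive (Third Law) adds discomfort
# growing as 1/d near the hazard. The robot settles where the combined
# potential U(d) = w2*d + w3/d is minimal.

w2 = 1.0   # strength of the weakly given order (pulls the robot in)
w3 = 4.0   # strengthened Third Law (pushes the robot away)

d = np.linspace(0.1, 10.0, 1001)   # candidate distances from the hazard
U = w2 * d + w3 / d                # combined "discomfort" potential

print(f"numerical equilibrium:   d* = {d[np.argmin(U)]:.2f}")
print(f"closed form sqrt(w3/w2): d* = {np.sqrt(w3 / w2):.2f}")
```

Neither drive wins: the robot parks at d* ≈ 2 and stays there, much as Speedy circles the selenium pool until Powell deliberately endangers himself, invoking the First Law to break the deadlock.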

In “Escape!,” Susan Calvin depresses the strength of the First Law enough to allow a super-intelligent robot to design a faster-than-light interstellar transportation method, even though it causes the deaths (but only temporarily!) of human pilots. In “The Evitable Conflict,” the machines that control the world’s economy interpret the First Law as protecting all humanity, not just individual human beings. This foreshadows Asimov’s later introduction of the “Zeroth Law” that can supersede the original three, potentially allowing a robot to harm a human being for humanity’s greater good.

0. A robot may not harm humanity or, through inaction, allow humanity to come to harm.

:break:

#### **Robots without ethics**
It is reasonable to fear that, without ethical constraints, robots (or other artificial intelligences) could do great harm, perhaps to the entire human race, even by simply following their human-given instructions.

![Asimov’s laws are in a particular order, for good reason](https://62e528761d0685343e1c-f3d1b99a743ffa4142d9d7f1978d9686.ssl.cf2.rackcdn.com/files/124046/area14mp/image-20160525-25209-sgeji4.png)
###### -> Asimov’s laws are in a particular order, for good reason. Randall Munroe/xkcd, CC BY-NC <-

The 1991 movie “Terminator 2: Judgment Day” begins with a well-known science fiction scenario: an AI system called Skynet starts a nuclear war and almost destroys the human race. Deploying Skynet was a rational decision (it had a “perfect operational record”). Skynet “begins to learn at a geometric rate,” scaring its creators, who try to shut it down. Skynet fights back (as a critical defense system, it was undoubtedly programmed to defend itself). Skynet finds an unexpected solution to its problem (through creative problem solving, unconstrained by common sense or morality).

@[youtube](https://www.youtube.com/watch?v=4DQsG3TKQ0I)
###### ->Catastrophe results from giving too much power to artificial intelligence.<-

Less apocalyptic real-world examples of out-of-control AI have actually taken place. High-speed automated trading systems have responded to unusual conditions in the stock market, creating a positive feedback cycle resulting in a “flash crash.” Fortunately, only billions of dollars were lost, rather than billions of lives, but the computer systems involved have little or no understanding of the difference.
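
The failure mode here is a plain positive feedback loop. A minimal sketch, entirely made up and not a model of any real market or trading system: each automated trader reacts to the latest price drop by selling, and that selling produces a larger drop.

```python
# Toy positive-feedback loop (invented; not any real trading system):
# automated sellers react to the last price drop by selling more, and
# their selling deepens the next drop.

price = 100.0
drop = 0.5        # a small initial shock
feedback = 1.5    # > 1: each drop triggers a larger one; < 1: shock dies out

for step in range(10):
    selling = feedback * drop   # algorithms respond to the latest drop...
    price -= selling            # ...and their selling moves the price further
    drop = selling
    print(f"step {step}: price {price:7.2f}")
```

With `feedback` below 1 the same code shows the shock damping out harmlessly: the cycle, not the initial shock, is the danger.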

:break:

#### **Toward defining robot ethics**
While no simple fixed set of mechanical rules will ensure ethical behavior, we can make some observations about properties that a moral and ethical system should have in order to allow autonomous agents (people, robots or whatever) to live well together. Many of these elements are already expected of human beings.

These properties are inspired by a number of sources, including the Engineering and Physical Sciences Research Council (EPSRC) Principles of Robotics and recent work on the cognitive science of morality and ethics focused on neuroscience, social psychology, developmental psychology and philosophy.

The EPSRC takes the position that robots are simply tools, for which humans must take responsibility. At the extreme other end of the spectrum is the concern that super-intelligent, super-powerful robots could suddenly emerge and control the destiny of the human race, for better or for worse. The following list defines a middle ground, describing how future intelligent robots should learn, like children do, how to behave according to the standards of our society.

* If robots (and other AIs) increasingly participate in our society, then they will need to follow moral and ethical rules much as people do. Some rules are embodied in laws against killing, stealing, lying and driving on the wrong side of the street. Others are less formal but nonetheless important, like being helpful and cooperative when the opportunity arises.
* Some situations require a quick moral judgment and response – for example, a child running into traffic or the opportunity to pocket a dropped wallet. Simple rules can provide automatic real-time response, when there is no time for deliberation and a cost-benefit analysis. (Someday, robots may reach human-level intelligence while operating far faster than human thought, allowing careful deliberation in milliseconds, but that day has not yet arrived, and it may be far in the future.)
* A quick response may not always be the right one, which may be recognized after feedback from others or careful personal reflection. Therefore, the agent must be able to learn from experience, including feedback and deliberation, resulting in new and improved rules.
* To benefit from feedback from others in society, the robot must be able to explain and justify its decisions about ethical actions, and to understand explanations and critiques from others.
* Given that an artificial intelligence learns from its mistakes, we must be very cautious about how much power we give it. We humans must ensure that it has experienced a sufficient range of situations and has satisfied us with its responses, earning our trust. The critical mistake humans made with Skynet in “Terminator 2” was handing over control of the nuclear arsenal.
* Trust, and trustworthiness, must be earned by the robot. Trust is earned slowly, through extensive experience, but can be lost quickly, through a single bad decision (see the sketch below).
* As with a human, any time a robot acts, the selection of that action in that situation sends a signal to the rest of society about how that agent makes decisions, and therefore how trustworthy it is.
* A robot mind is software, which can be backed up, restored if the original is damaged or destroyed, or duplicated in another body. If robots of a certain kind are exact duplicates of each other, then trust may not need to be earned individually. Trust earned (or lost) by one robot could be shared by other robots of the same kind.
* Behaving morally and well toward others is not the same as taking moral responsibility. Only competent adult humans can take full responsibility for their actions, but we expect children, animals, corporations, and robots to behave well to the best of their abilities.

Human morality and ethics are learned by children over years, but the nature of morality and ethics itself varies with the society and evolves over decades and centuries. No simple fixed set of moral rules, whether Asimov’s Three Laws or the Ten Commandments, can be adequate guidance for humans or robots in our complex society and world. Through observations like the ones above, we are beginning to understand the complex feedback-driven learning process that leads to morality.
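
Two of the points above (trust earned slowly but lost quickly, and trust shared among exact duplicates) are easy to make concrete. A minimal sketch, with a formulation and constants of my own choosing:

```python
# Toy trust dynamics (my formulation; the constants are arbitrary). Trust
# attaches to a robot *model*, so exact duplicates share one score; it
# rises a little with each good decision and collapses after one bad one.

EARN_RATE = 0.02    # slow, asymptotic gain per good decision
LOSS_FACTOR = 0.4   # a single bad decision keeps only 40% of current trust

fleet_trust = {"housekeeper-v1": 0.5}   # shared score per robot model

def record_decision(model: str, good: bool) -> None:
    t = fleet_trust[model]
    fleet_trust[model] = t + EARN_RATE * (1.0 - t) if good else t * LOSS_FACTOR

# Fifty good decisions, by any units of this kind, slowly build shared trust...
for _ in range(50):
    record_decision("housekeeper-v1", good=True)
print(f"after 50 good decisions: {fleet_trust['housekeeper-v1']:.2f}")  # ~0.82

# ...and one bad decision by one unit damages trust in every duplicate.
record_decision("housekeeper-v1", good=False)
print(f"after one bad decision:  {fleet_trust['housekeeper-v1']:.2f}")  # ~0.33
```

The asymmetry is the point: trust climbs asymptotically and falls multiplicatively, so a single failure undoes the work of many successes.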


Beyond Asimov: how to plan for ethical robots

Facebook Code

Introducing DeepText: Facebook's text understanding engine  #MachineLearning #NLP

  • Using deep learning, we are able to understand text better across multiple languages and use labeled data much more efficiently than traditional NLP techniques.
  • It’s reasonable to assume that the posts on these pages will represent a dedicated topic – for example, posts on the Steelers page will contain text about the Steelers football team.
  • With deep learning, we can instead use “word embeddings,” a mathematical concept that preserves the semantic relationship among words (see the sketch after this list).
  • Written language, despite the variations mentioned above, has a lot of structure that can be extracted from unlabeled text using unsupervised learning and captured in embeddings.
  • Text understanding on Facebook requires solving tricky scaling and language challenges where traditional NLP techniques are not effective.

Read the full article here.


@troykelly: “Introducing DeepText: Facebook’s text understanding engine #MachineLearning #NLP”




Facebook Code

Facebook’s AI is almost as smart as you (video)

.@CNETUpdate: Facebook's A.I. is almost as smart as you

  • You may never have to leave Facebook thanks to its new DeepText engine that can understand your posts with “near-human accuracy.”
  • The disc-shaped Roll 2 plays a little louder, has significantly better wireless range and now includes UE’s Floatie accessory in the…
  • Microsoft backs out of making more consumer Windows Phones after cutting jobs from its…
  • Facebook’s AI is almost as smart as you: CNET Update
  • Twitter relaxes its 140-character limit

Read the full article here.


@CNET: “.@CNETUpdate: Facebook’s A.I. is almost as smart as you”


You may never have to leave Facebook thanks to its new DeepText engine that can understand your posts with “near-human accuracy.”


Facebook’s AI is almost as smart as you (video)

Wearable adoption more than doubled in past two years

Wearable adoption more than doubled in past two years | #MachineLearning #Apple #RT

  • The adoption of wearables has skyrocketed, rising from 21 percent of the U.S. population in 2014 to 49 percent in 2016, according to a report by consulting firm PwC.
  • Adoption of wearables declines with age, the report said.
  • The report found that consumers aged 35 to 49 are most likely to own smart watches.
  • And parents are significantly more likely to own not just one, but multiple wearable devices, the report said.
  • PwC’s report, “The Wearable Life: Connected Living in a Wearable World,” is an update to a report the company created in 2014.

Read the full article here.


@Ronald_vanLoon: “Wearable adoption more than doubled in past two years | #MachineLearning #Apple #RT”


The adoption of wearables has skyrocketed, rising from 21 percent of the U.S. population in 2014 to 49 percent in 2016, according to a report by consulting firm PwC.


Wearable adoption more than doubled in past two years

3 ways the IoT could dramatically help fight climate change

3 ways the Internet of Things could help fight climate change | #MachineLearning #IoT #RT

  • The Internet of Things may well be considered a powerful ally in the fight against climate change, precisely at a time when global leaders advocate for more accessible, scalable and economically viable ways to protect our planet.
  • 3 ways the Internet of Things could help fight climate change
  • Learn more about our events which work to shape the Global, Regional and Industry agendas
  • In the last few years, an increasing number of public-private initiatives have adopted IoT solutions, ranging from smart grids to energy efficiency applications.
  • Even more important is the fact that all those industries have the incentive to increase the adoption of IoT solutions, if they want to keep growing and stay competitive in the global arena.

Read the full article here.


@Ronald_vanLoon: “3 ways the Internet of Things could help fight climate change | #MachineLearning #IoT #RT”


The Internet of Things (IoT) is booming: the number of connected devices is expected to exceed the 60 billion threshold in 2016 and the IoT market is projected to generate $14.4 trillion in increased revenues and lower costs by 2022.


3 ways the IoT could dramatically help fight climate change

The Making of a Cheatsheet: Emoji Edition — Emily Barry

Amazing! “The Making of a #MachineLearning Cheatsheet: Emoji Edition” by @emilyinamillion

  • Another thing I love is data science.
  • Linear regression especially is a thing you learn constantly in other contexts and may not realize it’s used in data science.
  • There are a few important things in data science that might have just ended up having their own section in a perfect world where a 3-D cheatsheet is a thing.
  • Nor did I set out to make an emoji cheatsheet.
  • Clustering is an extremely useful subset of data science that’s like classification, but not quite.

Read the full article here.


@dataandme: “Amazing! “The Making of a #MachineLearning Cheatsheet: Emoji Edition” by @emilyinamillion”


I’ve mentioned this before, but I really love emoji. I spend so much of my time communicating with friends and family on chat, emoji bring necessary animation to my words that might otherwise look flat on the screen. 💁


The Making of a Cheatsheet: Emoji Edition — Emily Barry

Elon Musk says humans need “neural lace” to compete with AI

Elon Musk says humans need “neural lace” to compete with AI

  • Elon Musk: Humans Need ‘Neural Lace’ to Compete With AI
  • “Something I think is going to be quite important – I don’t know of a company that’s working on it seriously – is a neural lace.”
  • Musk believes that a technology concept known as “neural lace” could act as a wireless brain-computer interface capable of augmenting natural intelligence.
  • Billionaire polymath Elon Musk has warned that humans risk being treated like house pets by artificial intelligence (AI) unless they implant technology into their brains.
  • Speaking at the Code Conference in California on Wednesday, Musk said a neural lace could work “well and symbiotically” with the rest of a human’s body.

Read the full article here.


@Newsweek: “Elon Musk says humans need “neural lace” to compete with AI”


Tesla founder believes humans need to add digital implants to their brains to compete with AI.


Elon Musk says humans need “neural lace” to compete with AI