- The rise of artificial intelligence (AI) raises not only technical questions but also ethical and philosophical ones.
- With more-powerful tech such as nuclear weapons, synthetic biology, or superhuman AI in particular, we don’t want to learn from mistakes.
- If today’s fairly low-tech smartphones can persuade their owners to stare at them for a large fraction of the day, a superhuman AI could really hack our human reward system.
- There are many tough open questions that we still need to answer if we’re going to reap the benefits of AI while avoiding problems.
- I’m cautiously optimistic that we can create an awesome future with AI if we win this race between the growing power of the technology and the growing wisdom with which we manage it.
# AI will only get more powerful. Will we be ready for it?
The rise of artificial intelligence (AI) raises not only technical questions but also ethical and philosophical ones. Should superhuman AI be controlled when it arrives—and if so, by whom? If robots are doing all of our work, how will humans find meaning?
“Whatever we humans want the future to be, AI can help us accomplish this,” says MIT professor Max Tegmark, author of Life 3.0. “A big part of the problem is we don’t talk much about what we actually want.” So let’s start talking about it.
What will happen once machines don’t just do many of our jobs, but outsmart us at every task? If machines can produce everything we need, then we could all effectively enjoy a free vacation for the rest of our lives. But if a single power owns all the machines while everybody else starves, the scenario looks far less rosy.
With more-powerful tech such as nuclear weapons, synthetic biology, or superhuman AI in particular, we don’t want to learn from mistakes. We want to get things right the first time because it might be the only time we have.
You could easily program a modern aircraft to know that it should never, under any circumstances, fly into a building. Or you could teach an industrial robot that installing a particular car part right now is never so important that it’s worth crushing somebody.