AI – Threat or Not?

Many of you have read about the little feud between the two tech kings Mark Zuckerberg and Elon Musk. The CEOs of two of the world’s most successful companies are no doubt both smart men. But as so often happens with smart and super-ambitious people, they ended up arguing about something; this time it was artificial intelligence (AI).


The Feud

I’m writing about this because I read the news about their debate too, and soon after, I happened to stumble on a super interesting scientific article (found it in our office’s bathroom…) about the same issue. Few of us can say that we’ve never thought about the possibility of losing our job to a machine at some point in the future. AI and its power are at the same time intriguing and scary. I thought it would be nice to summarise the article (When Will AI Exceed Human Performance? Evidence from AI Experts) for you, because it sums up the predictions of real experts in the field of AI. Also, based on the article, we can say who was right, Musk or Zuckerberg. Although both of them clearly have their own agendas behind their opinions, it’s still kind of fun to figure out which one was (more) right.

But first, a short background on the fight at the sandbox. Musk has long been concerned about AI’s possible power in the future. He fears that AI might have the capability to become something very dangerous, even apocalyptic. Zuckerberg, in the other corner, has been more carefree about its possible negative consequences. Then Zuckerberg decided to open his mouth in a Facebook Live broadcast, saying that Musk was a “naysayer” and that his fears of doomsday were unnecessary negativity. Musk came back on Twitter (of course) with a somewhat more insulting line: “I’ve talked to Mark about this. His understanding of the subject is limited.”

When will AI exceed human performance?

Next, we’ll jump to the real question(s) at hand: when will AI exceed human performance, and will it become a threat to us at some point? But before that, a short credibility check on the article. It was written by Katja Grace and her colleagues at the Future of Humanity Institute at the University of Oxford. They sent a survey to all researchers who published in 2015 in the two premier venues for peer-reviewed research in machine learning. 352 researchers answered the survey, a solid basis for the data to represent real expertise in the field of AI.

The results of the survey are somewhat intimidating, at least in my opinion. I’d guess that many people understand that machines will be totally competent at doing repetitive, simple work and following instructions (as they’ve done for quite a while already). But the prospect of machines learning to do things that require more cognitive skills and even creativity is something totally different. Or what would you think about a machine writing a best seller? According to the experts, a best seller written by a computer could be reality in 2049 (mean of the answers).

On a side note, if a machine can write one best seller, can’t it then repeat the same task over and over again? And when multiple machines can do that, what will happen to literature? Well, these things seem a bit overwhelming, but at least 2049 is far away, right? Well, the same experts predict that machines will write a high school essay as well as humans in less than ten years. Below you’ll find some other tasks that machines will be able to do better than us in the future.

In general, the experts believe that with 50% probability, machines will outperform humans in all tasks within 45 years. They also estimate that with 50% probability, AI will automate all human jobs within 120 years. It’s important to note that these figures are means. For example, Asian respondents expect HLMI (high-level machine intelligence, achieved when unaided machines can accomplish every task better and more cheaply than human workers) to happen in 30 years, while North American respondents see it happening in 74 years.

According to the article by Grace et al.: “HLMI is seen as likely to have positive outcomes but catastrophic risks are possible.” When respondents were asked whether HLMI would have a positive or negative impact on humanity over the long run, the median probabilities for the different outcomes were as in the image below.

So, according to the experts, there is a mean probability of 15% that high-level machine intelligence will result in a bad or extremely bad (e.g. human extinction) scenario. No wonder the same experts hope that more resources would be allocated to minimizing the potential risks of AI: 48% of the respondents think that research on minimizing the risks of AI should be prioritized more than it was at the time of the survey (with only 12% wishing for less).

We should note that all the responses were given in 2015. Two years is a long time at the current speed of development. For example, the experts estimated that AI would be better than humans at Go (the game) by 2027. However, Google’s AI has already done that. And as for Google’s AI, it also happened to create a new artificial language of its own for translating between other languages, and it did this without anyone asking. Edit: The same kind of thing actually just happened with Facebook’s AI. In their case, they switched off the bot because it reportedly created its own language. Maybe Zuckerberg is secretly afraid of AI.

Conclusion

I believe it’s safe to say that sh*t is getting real. In my opinion, the most important question now is how society will cope with these seemingly inevitable changes. Will there be an apocalypse of some sort (Musk’s scenario), or will we create systems and plans that keep AI under control? Moreover, will decision makers around the world strive to preserve human jobs, or will the world continue its drive to increase productivity by all means necessary? I don’t think anyone can answer these questions at the moment. We just have to wait and hope for the best.

Now back to the original question: who knows best, Musk or Zuckerberg? Well, sorry to disappoint you, but I don’t have a definite answer to this question either. I’d say Musk is more on the right path, but I might just be saying that because I don’t want him going on Twitter saying: “I’ve talked to Toni about this. His understanding of the subject is limited.”
