Several well-known smart people have been voicing fears about artificial intelligence being a threat to humanity. For example,
But I fail to see any mention of a timeline. Is it decades, centuries, or millennia? Predictions without a date attached are meaningless, since they can never be proven wrong. I disagree with the above, so it's nice to see there are plenty of others who do as well.
David W. Buchanan, member of IBM Watson "Jeopardy!" system team
Science fiction is partly responsible for these fears. A common trope
works as follows: Step 1: Humans create AI to perform some unpleasant
or difficult task. Step 2: The AI becomes conscious. Step 3: The
AI decides to kill us all. As science fiction, such stories can be
great fun. As science fact, the narrative is suspect, especially
around Step 2, which assumes that by synthesizing intelligence, we
will somehow automatically, or accidentally, create consciousness. I
call this the consciousness fallacy. It seems plausible at first,
but the evidence doesn't support it. And if it is false, it means
we should look at AI very differently.
Erik Sofge: Don't believe the hype about artificial intelligence, or the horror
Forget about the risk that machines pose to us in the decades ahead.
The more pertinent question, in 2015, is whether anyone is going to
protect mankind from its willfully ignorant journalists.
...
Here's the letter at its most ominous, which is to say, not ominous at all:
"Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls."
...
To use the CNET and BBC stories as examples, neither includes
quotes or clarifications from the researchers who helped put
together either the letter or its
companion research document
...
The truth is, there are researchers within the AI community
who are extremely concerned about the question of artificial
superintelligence, which is why FLI included a section in the
letter's companion document about those fears.
But it's also true that these researchers are in the extreme minority.
Recent alarms over artificial intelligence research raise eyebrows at AI conference
"We're in control of what we program," Bresina said, noting it was
his own opinion and not an official NASA statement. "I'm not worried
about the danger of AI... I don't think we're that close at all. We
can't program something that learns like a child learns even --
yet. The advances we have are more engineering things. Engineering
tools aren't dangerous. We're solving engineering problems."
A malevolent AI would have to outwit not only raw human brainpower but the combination of humans and whatever loyal AI tech we are able to command -- a combination that will best either one on its own.
Eric Horvitz's position contrasts with that of several other leading thinkers:
A Microsoft Research chief has said he thinks artificial intelligence
systems could achieve consciousness, but has played down the threat
to human life.