Elon Musk: "AI is akin to Summoning the Demon"

ShaunHorton

AW's resident Velociraptor
Super Member
Registered
Joined
Jan 6, 2014
Messages
3,550
Reaction score
511
Location
Washington State
Website
shaunhorton.blogspot.com
I would say the problem isn't necessarily creating self-awareness, but that we will create something and give it commands that get interpreted in ways we didn't anticipate. The whole "Humans must be protected, therefore they must be protected from themselves" idea. There's also the problem of new versions of software that don't completely overwrite the old versions, or of programs interacting in unexpected ways.

That is the real concern, I think. I mean, just look at how bad we are at communicating with each other. And when was the last time a major software program was released and didn't need patches or upgrades in short order?

I'd simply put it that humans are not as smart as we give ourselves credit for.
 

dfwtinman

Cubic Zirconia in the rough
Kind Benefactor
Super Member
Registered
Joined
Jan 13, 2013
Messages
3,061
Reaction score
470
Location
Atlanta, Georgia
It will be a meeting in the middle of increasing AI self-awareness and (rapidly) declining human self-awareness.

The increasing AI slope would plot as the straight line y = ( 2/3 ) x – 4.

The decreasing human self-awareness slope would plot as the straight line y = –2x + 3.

I think the intersection happens below the mid-point.
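
For what it's worth, the crossing point of those two lines can be worked out exactly; a quick sketch, purely for the joke's sake:

```python
# Solve (2/3)x - 4 = -2x + 3: the intersection of the two
# "self-awareness" lines from the post, kept exact with fractions.
from fractions import Fraction

def intersect(m1, b1, m2, b2):
    """Intersection of y = m1*x + b1 and y = m2*x + b2 (requires m1 != m2)."""
    x = Fraction(b2 - b1) / Fraction(m1 - m2)
    return x, m1 * x + b1

x, y = intersect(Fraction(2, 3), -4, -2, 3)
print(x, y)  # 21/8 -9/4  (i.e. x = 2.625, y = -2.25)
```

So the two lines do meet below the x-axis, for whatever that says about us.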
 

Albedo

Alex
Super Member
Registered
Joined
Dec 17, 2007
Messages
7,363
Reaction score
2,924
Location
A dimension of pure BEES
Kurzweil is relying on an immanent AI eschaton to save him from death. I suspect he will be bitterly disappointed.
 

William Haskins

poet
Kind Benefactor
Absolute Sage
Super Member
Registered
Joined
Feb 12, 2005
Messages
29,099
Reaction score
8,848
Age
58
Website
www.poisonpen.net
conflict, conquest and predatory behavior are constantly observable in nature all the way to the cellular level.

the paradox is certainly debatable, but this one aspect of it seems relatively self-evident. the most banal and routine aspects of modern warfare, for instance, make the most infamous tyrants and generals of old seem like amateur night, both in terms of body count and overall destruction.
 

robeiae

Touch and go
Kind Benefactor
Super Member
Registered
Joined
Mar 18, 2005
Messages
46,262
Reaction score
9,912
Location
on the Seven Bridges Road
Website
thepondsofhappenstance.com
Actually, I quite agree with the man.

We have little understanding of what self awareness consists of, esp as a philosophical concept. Who knows what strides may be accomplished in fifty years. Quantum computers? Biologically based computer-like devices?

An artificial self-aware entity is definitely in the realm of science fiction, but it's also in the realm of near future science fiction. We have not a clue as to what such things would be like.
I agree.

One of the real pitfalls in AI research is the assumption that if AI is somehow achieved, we will be able to recognize it, that we will in fact know it has been achieved. That's a major error, imo. And it's compounded by the assumptions that we will also be able to understand it and, in fact, control it to some extent (at the very least, turn it off and on).
 

Cyia

Rewriting My Destiny
Super Member
Registered
Joined
Nov 15, 2008
Messages
18,618
Reaction score
4,032
Location
Brillig in the slithy toves...
Seeing that after millennia of philosophy and a century of modern neuroscience we still can't effectively define self-awareness, let alone understand how the brain produces it,


Or IF it's the brain that produces it, at all. If you're going to add philosophy to the mix, then you have to consider the matter of the soul and/or spirit, which aren't necessarily biological functions.

A conscience can be defined as a voluntary adherence to socially accepted morals, and guilt for straying from them. You might be able to program something like that into an AI, but how would you program something that's not a biological function?

In other words, you can make the puppet move, but you can't cut its strings. :D
 

Albedo

Alex
Super Member
Registered
Joined
Dec 17, 2007
Messages
7,363
Reaction score
2,924
Location
A dimension of pure BEES
I agree.

One of the real pitfalls in AI research is the assumption that if AI is somehow achieved, we will be able to recognize it, that we will in fact know it has been achieved. That's a major error, imo. And it's compounded by the assumptions that we will also be able to understand it and, in fact, control it to some extent (at the very least, turn it off and on).

AI research as it stands involves the construction of ever more complex, but still fearsomely dumb, expert systems. No one is seriously working on what's called 'artificial general intelligence', or one that would mimic human self-awareness. And for obvious reasons. Human central nervous systems are fine-tuned for one task: piloting human bodies around their environment, avoiding tissue damage. That's not a good model for much of what we need AI to do.

If self-awareness did evolve in AI, it would be from a direction completely unexpected, and yes, we probably wouldn't recognise it. And it probably wouldn't recognise us. Imagine a computer program designed to observe microscopic fluctuations in share prices and attempt to predict trends, suddenly becoming aware. Do you think it could have anything to say to us?
 

dfwtinman

Cubic Zirconia in the rough
Kind Benefactor
Super Member
Registered
Joined
Jan 13, 2013
Messages
3,061
Reaction score
470
Location
Atlanta, Georgia
Well,

Many futurists believe that artificial intelligence will ultimately transcend the limits of progress. Ray Kurzweil has used Moore's law (which describes the relentless exponential improvement in digital technology) to calculate that desktop computers will have the same processing power as human brains by the year 2029. He also predicts that by 2045 artificial intelligence will reach a point where it is able to improve itself at a rate that far exceeds anything conceivable in the past, a scenario that science fiction writer Vernor Vinge named the "singularity".
http://en.m.wikipedia.org/wiki/Artificial_intelligence
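
The arithmetic behind that kind of extrapolation is just compounding. A quick sketch of the calculation; the starting figure, target figure, and doubling period below are illustrative assumptions, not numbers from the article:

```python
# Back-of-envelope Moore's-law-style extrapolation: how many years of
# steady doubling to get from one level of computing power to another?
# Assumed (hypothetical) numbers: a 1e11 ops/sec desktop, a commonly
# cited ~1e16 ops/sec estimate for the human brain, doubling every 2 years.
import math

def years_to_reach(start_ops, target_ops, doubling_years=2.0):
    """Years until start_ops, doubling every doubling_years, reaches target_ops."""
    doublings = math.log2(target_ops / start_ops)
    return doublings * doubling_years

print(years_to_reach(1e11, 1e16))  # ≈ 33.2
```

Which is the whole point of exponential arguments like Kurzweil's: five orders of magnitude is only about seventeen doublings.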


Just grist for the mill.
 

Albedo

Alex
Super Member
Registered
Joined
Dec 17, 2007
Messages
7,363
Reaction score
2,924
Location
A dimension of pure BEES
Here are some of the problematic assumptions with a Kurzweilian singularity, IMO, in no particular order:

1. Assuming Moore's Law (a descriptive observation in economics, not a physical law) will continue to hold for much longer.

2. Assuming that consciousness/intelligence is a simple function of processing power.

3. Assuming that a hyper-rational, smarter-than-human AI would think it was a good idea to design its own replacement.
 

Hapax Legomenon

Super Member
Registered
Joined
Jun 28, 2007
Messages
22,289
Reaction score
1,491
Yes, I don't get the idea that AI will somehow be extremely dangerous. I don't understand the idea that you won't be able to just turn it off. Just if you're going to try to make a self-aware robot, make it very fragile and don't give it opposable thumbs. Simple.
 

benbradley

It's a doggy dog world
Super Member
Registered
Joined
Dec 5, 2006
Messages
20,322
Reaction score
3,513
Location
Transcending Canines
Yes, I don't get the idea that AI will somehow be extremely dangerous.
We already rely on a lot of technology that is "extremely dangerous" in the sense that a lot of people could die if it fails, so to that extent, technology by itself is already extremely dangerous. Many such essential systems (electric power generation and transmission, for example) have been shown to be vulnerable to viruses in various ways. Imagine Stuxnet, but aimed at a bigger target.

I don't understand the idea that you won't be able to just turn it off. Just if you're going to try to make a self-aware robot, make it very fragile and don't give it opposable thumbs. Simple.
Such an intelligence may well be "bigger" than can fit in a single robot.

There was a network developed in the 1970s, designed so that if any of several cities were destroyed in a nuclear war, communications would still be maintained among the remaining cities. This was the ARPANET, and it was the basis for the Internet.

There's this quote: "The Net interprets censorship as damage and routes around it."
https://en.wikipedia.org/wiki/John_Gilmore_(activist)
It likewise interprets actual damage as damage.

You can't turn off the Internet, not without drastic, expensive, and probably permanently damaging methods (destroying computer hardware and/or power sources). The Internet is (as far as we know) an ideal breeding ground for a self-aware or superintelligent AI. Such an AI may well not only know how to back itself up in dozens of ways and places, but also learn to "lie low."

For a fascinating SF novel on such an AI, check out "The Adolescence of P-1." It's really old as computer-based novels go, and many parts of it are hopelessly dated, but I think the overall idea is quite prescient. I find its last words to have as much power and emotional impact as those of Orwell's "1984."
 

Hapax Legomenon

Super Member
Registered
Joined
Jun 28, 2007
Messages
22,289
Reaction score
1,491
Interpreting damage as a bad thing that needs to be solved seems like a very "living" perspective. Something that is not living may not think that way.

Anyway, I thought the intent of the post was not that AI would develop accidentally but rather that people are trying to create AI. Honestly, if an AI made itself, wouldn't it not really be artificial intelligence anymore?
 

rugcat

Lost in the Fog
Kind Benefactor
Super Member
Registered
Joined
Sep 27, 2005
Messages
16,339
Reaction score
4,110
Location
East O' The Sun & West O' The Moon
Website
www.jlevitt.com
Human intelligence evolved, geologically speaking, in an extremely short period of time. Our species learned to transform the planet, and as a byproduct is well on the way to wiping out most other species – all this in a few hundred years, which is the blink of an eye.

Perhaps AI is the next step in evolution. It too may transform our planet in ways we cannot conceive of. And perhaps, just as we have supplanted most other species, this next evolutionary step will have the elimination of humans as a side effect.

Personally, I'm not so sure that's a terrible thing. High intelligence coupled with an innate competitive viciousness is not a pleasant combination, and may well in the end prove to be an evolutionary dead end.
 

Diana Hignutt

Very Tired
Kind Benefactor
Super Member
Registered
Joined
Feb 13, 2005
Messages
13,315
Reaction score
7,098
Location
Albany, NY
Or IF it's the brain that produces it, at all. If you're going to add philosophy to the mix, then you have to consider the matter of the soul and/or spirit, which aren't necessarily biological functions.

A conscience can be defined as a voluntary adherence to socially accepted morals, and guilt for straying from them. You might be able to program something like that into an AI, but how would you program something that's not a biological function?

In other words, you can make the puppet move, but you can't cut its strings. :D

Sure, you can, esp. once you add in spirit. If spirit is something separate that descends into appropriate biological entities with sufficient degrees of thinking and perceptual power...then why couldn't one jump into a suitable non-biological body...esp. if the spirit was an evil, human-hating, cast-out-of-heaven son of a bitch...to take Musk's comments at face value?

Or, we could go the philosophical route, banish souls, invoke Hofstadter instead, and accept his claim in I Am a Strange Loop: remove all magic from being, see that self-awareness is simply an ever-growing +1 that exists at the point of perception, a property of all living creatures to varying degrees...and then the dangers of AI once again dance into manifestation...and may already have done so in a primitive sense, in the form of certain hard-to-eradicate computer viruses (like Stuxnet, etc.).

http://en.wikipedia.org/wiki/I_Am_a_Strange_Loop
 

robeiae

Touch and go
Kind Benefactor
Super Member
Registered
Joined
Mar 18, 2005
Messages
46,262
Reaction score
9,912
Location
on the Seven Bridges Road
Website
thepondsofhappenstance.com
Imagine a computer program designed to observe microscopic fluctuations in share prices and attempt to predict trends, suddenly becoming aware. Do you think it could have anything to say to us?

It would have as much to say to us--and be as interested in us--as would a race of beings capable of interstellar travel.
 

Summonere

Super Member
Registered
Joined
Feb 12, 2005
Messages
1,090
Reaction score
136
So I'm guessing there is no AI solution to the halting problem?
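
Correct, and not just for AI: the classic diagonalization argument rules out any halting decider at all. A sketch of the construction (the "oracle" here is a deliberately naive stand-in, just to show the shape of the argument):

```python
# Sketch of the diagonalization argument: any purported halting oracle
# can be defeated by a program built to do the opposite of its prediction.

def make_contrarian(oracle):
    """Build a program that contradicts whatever the oracle predicts about it."""
    def contrarian():
        if oracle(contrarian):   # oracle predicts it halts...
            while True:          # ...so it loops forever
                pass
        # oracle predicts it loops forever, so it halts immediately
    return contrarian

def naive_oracle(prog):
    return True  # claims every program halts

c = make_contrarian(naive_oracle)
# By construction the oracle is wrong about c: it says c halts,
# but calling c() would never return.
print(naive_oracle(c))  # True
```

Swap in any cleverer oracle you like (AI-powered or not) and the same trick defeats it, which is why no general solution can exist.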
 

backslashbaby

~~~~*~~~~
Super Member
Registered
Joined
Feb 12, 2009
Messages
12,635
Reaction score
1,603
Location
NC
So I'm guessing there is no AI solution to the halting problem?

I thought it was a great point. The kinds of 'solutions' I've seen would still not work for an evolving, truly AI system of the sort that could take over much, imho :)

(I do think systems could 'take over' a lot; don't get me wrong. We've probably all had that happen on our own home systems!).
 

robeiae

Touch and go
Kind Benefactor
Super Member
Registered
Joined
Mar 18, 2005
Messages
46,262
Reaction score
9,912
Location
on the Seven Bridges Road
Website
thepondsofhappenstance.com
Just an FYI, there is a great book--now out of date from a technology standpoint, but not from a philosophical one, imo--on AI by Douglas Hofstadter: Gödel, Escher, Bach: An Eternal Golden Braid. A fascinating read, for those who have never encountered it.

I bring it up because of the relationship of the Halting Problem to Gödel's Incompleteness Theorem.
 

Diana Hignutt

Very Tired
Kind Benefactor
Super Member
Registered
Joined
Feb 13, 2005
Messages
13,315
Reaction score
7,098
Location
Albany, NY
Just an FYI, there is a great book--now out of date from a technology standpoint, but not from a philosophical one, imo--on AI by Douglas Hofstadter: Gödel, Escher, Bach: An Eternal Golden Braid. A fascinating read, for those who have never encountered it.

I bring it up because of the relationship of the Halting Problem to Gödel's Incompleteness Theorem.

But his I am a Strange Loop is not out of date from a technology standpoint, or at least, not that much so.