Elon Musk: "AI is akin to Summoning the Demon"

kuwisdelu

Revolutionize the World
Super Member
Registered
Joined
Sep 18, 2007
Messages
38,197
Reaction score
4,544
Location
The End of the World
Meh, as others have pointed out, we aren't really close to anything resembling AI as the mainstream perceives "AI", and most of the AI field isn't really working on anything like that in the first place.

There's definitely a conversation to be had on our over-dependence on technology, but most of the technology we rely on would be spoiled by calling it anything like "intelligent".

And if you spoil them, they start demanding more e-cookies.
 
Last edited:

robeiae

Touch and go
Kind Benefactor
Super Member
Registered
Joined
Mar 18, 2005
Messages
46,262
Reaction score
9,912
Location
on the Seven Bridges Road
Website
thepondsofhappenstance.com
Will A.I. destroy us?

Depends on the A.I.

And what you mean by destroy.
Will AI even care about us? Will it even notice us?

It's the supreme conceit: that we humans matter, to advanced aliens from another galaxy, or to an ultra-powerful artificial intelligence. I think Dr. Manhattan started to realize this, 'til he let his human emotions back in...
 

kuwisdelu

Revolutionize the World
Super Member
Registered
Joined
Sep 18, 2007
Messages
38,197
Reaction score
4,544
Location
The End of the World
Will AI even care about us? Will it even notice us?

It's the supreme conceit: that we humans matter, to advanced aliens from another galaxy, or to an ultra-powerful artificial intelligence. I think Dr. Manhattan started to realize this, 'til he let his human emotions back in...

I suppose, but we haven't even developed artificial intelligence on par with the lowest forms of biological life; why should we be worried about AI that represents a quantum leap from what we haven't even developed yet?

It's a fun question for science fiction, but it has zero real-world implications today or even in the near-future.
 
Last edited:

benbradley

It's a doggy dog world
Super Member
Registered
Joined
Dec 5, 2006
Messages
20,322
Reaction score
3,513
Location
Transcending Canines
So I'm guessing there is no AI solution to the halting problem?

Will AI even care about us? Will it even notice us?

It's the supreme conceit: that we humans matter, to advanced aliens from another galaxy, or to an ultra-powerful artificial intelligence. I think Dr. Manhattan started to realize this, 'til he let his human emotions back in...
It seems unlikely that an "ultra-powerful artificial intelligence" would suddenly appear before much lesser, but successively more powerful, AIs are produced (and recognized by us as AIs).

The advanced aliens from another galaxy, however, are ready to snuff us all out the moment we've mined all the good stuff from the asteroids for them.
 

Diana Hignutt

Very Tired
Kind Benefactor
Super Member
Registered
Joined
Feb 13, 2005
Messages
13,314
Reaction score
7,098
Location
Albany, NY
I suppose, but we haven't even developed artificial intelligence on par with the lowest forms of biological life; why should we be worried about AI that represents a quantum leap from what we haven't even developed yet?

It's a fun question for science fiction, but it has zero real-world implications today or even in the near-future.

Sure we have; they're called genetic algorithms and cellular automata, and there's a whole field of scientific study, called alife (artificial life), that does exactly that.

http://en.wikipedia.org/wiki/Genetic_algorithm

http://en.wikipedia.org/wiki/Cellular_automaton

http://en.wikipedia.org/wiki/Artificial_life
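For anyone who hasn't seen one, a genetic algorithm can be surprisingly small. Here's a rough sketch in Python (the toy problem, fitness function, and parameters are just illustrative, not taken from any of the links above): evolve a random bit string toward all ones by repeated selection, crossover, and mutation.

import random

GENOME_LEN = 20                       # toy problem: evolve a string of 20 ones
POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 100, 0.02

def fitness(genome):
    return sum(genome)                # count of 1-bits; the maximum is GENOME_LEN

def crossover(a, b):
    cut = random.randint(1, GENOME_LEN - 1)
    return a[:cut] + b[cut:]          # single-point crossover

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)     # selection: keep the fitter half
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children
    if fitness(max(population, key=fitness)) == GENOME_LEN:
        break

print(generation, fitness(max(population, key=fitness)))

Nothing fancy, but it's the same basic select/recombine/mutate loop the genetic algorithm article describes, just on a trivial problem.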
 

Summonere

Super Member
Registered
Joined
Feb 12, 2005
Messages
1,090
Reaction score
136
Just an FYI, there is a great book on AI by Douglas Hofstadter--now out of date from a technology standpoint, but not from a philosophical one, imo: Gödel, Escher, Bach: An Eternal Golden Braid. A fascinating read, for those who have never encountered it.

I bring it up because of the relationship of the Halting Problem to Gödel's Incompleteness Theorem.
I've not read that work, but I have read Gödel's proof and see it as introducing the very computational conundrum the halting problem poses. Thus my curiosity.
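The connection is easy to see once you've met the classic diagonalization argument. Here's a rough sketch in Python (the halts() oracle is purely hypothetical; the whole point of the argument is that no such total, correct oracle can exist):

def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually halts."""
    raise NotImplementedError("The point of the argument is that this can't be written.")

def contrary(program):
    # Do the opposite of whatever the oracle predicts about program run on itself.
    if halts(program, program):
        while True:      # the oracle says "halts", so loop forever
            pass
    else:
        return           # the oracle says "loops", so halt immediately

# Now ask what halts(contrary, contrary) should return. If True, then
# contrary(contrary) loops forever; if False, it halts. Either answer is wrong,
# so no program can decide halting in general -- a computational cousin of the
# self-referential sentence at the heart of Gödel's incompleteness proof.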
 

kuwisdelu

Revolutionize the World
Super Member
Registered
Joined
Sep 18, 2007
Messages
38,197
Reaction score
4,544
Location
The End of the World
Sure we have; they're called genetic algorithms and cellular automata, and there's a whole field of scientific study, called alife (artificial life), that does exactly that.

http://en.wikipedia.org/wiki/Genetic_algorithm

http://en.wikipedia.org/wiki/Cellular_automaton

http://en.wikipedia.org/wiki/Artificial_life

Yeah, I know. I wrote that despite being familiar with that work.

I think it's all really cool from a tech perspective, but it's still thoroughly unimpressive compared to what mother nature's produced. Maybe I was exaggerating, but that's just my perspective.

The genetic algorithm stuff has been mostly proof of concept and tends to be less efficient at solving problems than just throwing computational hardware at them, and software AI really isn't anything like it is in science fiction.
 

Diana Hignutt

Very Tired
Kind Benefactor
Super Member
Registered
Joined
Feb 13, 2005
Messages
13,314
Reaction score
7,098
Location
Albany, NY
Yeah, I know. I wrote that despite being familiar with that work.

I think it's all really cool from a tech perspective, but it's still thoroughly unimpressive compared to what mother nature's produced. Maybe I was exaggerating, but that's just my perspective.

The genetic algorithm stuff has been mostly proof of concept and tends to be less efficient at solving problems than just throwing computational hardware at them, and software AI really isn't anything like it is in science fiction.

I can't argue that point...but I will add a 'yet' on to the end of your post...with sinister seasonal ominousness. ;)
 

benbradley

It's a doggy dog world
Super Member
Registered
Joined
Dec 5, 2006
Messages
20,322
Reaction score
3,513
Location
Transcending Canines
I've not read that work, but I have read Gödel's proof and see it as introducing the very computational conundrum the halting problem poses. Thus my curiosity.
GEB is Hofstadter's first work and interesting in its own way, but I very much enjoyed "The Mind's I", co-written with Daniel Dennett (yes, one of the "Four Horsemen" of atheism). It's not in any way rigorous, but it demonstrates (among other things) how human emotion can be manipulated.

I recall having "I Am a Strange Loop" around, but haven't read much of it. Perhaps I should push through it.
I can't argue that point...but I will add a 'yet' on to the end of your post...with sinister seasonal ominousness. ;)
Perhaps the largest yet most unfortunate book on cellular automata is "A New Kind Of Science." It has a "rigorous" dissection of 256 rather simple automata, and the "life evolution" or lack thereof of each. I suggest going to a bookstore or library to read it rather than buying it, unless you can get it for cheap. Be sure to check out its reviews, available everywhere on the Internet.
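For the curious, each of those 256 elementary rules fits in a few lines of code. Here's a rough sketch in Python (the rule number, grid width, and step count are arbitrary choices, just for illustration):

RULE = 110                  # any value 0-255 picks one of the 256 elementary rules
WIDTH, STEPS = 64, 32

def step(cells, rule):
    # Each new cell depends only on its (left, self, right) neighbourhood,
    # read as a 3-bit index into the rule number. Edges wrap around.
    out = []
    for i in range(len(cells)):
        idx = (cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % len(cells)]
        out.append((rule >> idx) & 1)
    return out

cells = [0] * WIDTH
cells[WIDTH // 2] = 1                       # start from a single live cell
for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, RULE)

Run it with different RULE values and you get everything from dull repetition to the weirdly organic-looking patterns Wolfram built the book around.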

And for hardware there's this:
http://www.parallella.org/
 

Diana Hignutt

Very Tired
Kind Benefactor
Super Member
Registered
Joined
Feb 13, 2005
Messages
13,314
Reaction score
7,098
Location
Albany, NY
GEB is Hofstadter's first work and interesting in its own way, but I very much enjoyed "The Mind's I", co-written with Daniel Dennett (yes, one of the "Four Horsemen" of atheism). It's not in any way rigorous, but it demonstrates (among other things) how human emotion can be manipulated.

I recall having "I Am a Strange Loop" around, but haven't read much of it. Perhaps I should push through it.


It's been some years since I read it, but as I recall, Hofstadter's point in I Am a Strange Loop is that the appearance of self-awareness is essentially the halting problem itself: the recursive and paradoxical nature of biological computers like brains is what we call consciousness. The brain keeps running, never finishing its never-ending input, until it stops running for mechanical reasons. I remember liking it a lot when I read it, so I do recommend it.
 

Zoombie

Dragon of the Multiverse
Super Member
Registered
Joined
Dec 24, 2006
Messages
40,775
Reaction score
5,947
Location
Some personalized demiplane
I'm not worried.

I find the worrying terribly misplaced, in fact.

Me too.

Basically, it's another way for people to be scared of the only possible course we have - to continue to refine and improve on our tools so that we can make the world better and maybe one day add other worlds to the list so we can't get taken out by a lucky asteroid.
 

benbradley

It's a doggy dog world
Super Member
Registered
Joined
Dec 5, 2006
Messages
20,322
Reaction score
3,513
Location
Transcending Canines
Me too.

Basically, it's another way for people to be scared of the only possible course we have - to continue to refine and improve on our tools so that we can make the world better and maybe one day add other worlds to the list so we can't get taken out by a lucky asteroid.
The scariest thing that could happen for people currently alive is for current technology to fail. Billions of people would starve.

But this is getting off the topic of AI, unless an AI intentionally causes it...
 

Diana Hignutt

Very Tired
Kind Benefactor
Super Member
Registered
Joined
Feb 13, 2005
Messages
13,314
Reaction score
7,098
Location
Albany, NY
The scariest thing that could happen for people currently alive is for current technology to fail. Billions of people would starve.

But this is getting off the topic of AI, unless an AI intentionally causes it...

Ssshhhh...don't give Ultron any ideas....
 

Albedo

Alex
Super Member
Registered
Joined
Dec 17, 2007
Messages
7,363
Reaction score
2,924
Location
A dimension of pure BEES
The scariest thing that could happen for people currently alive is for current technology to fail. Billions of people would starve.

But this is getting off the topic of AI, unless an AI intentionally causes it...

Our best bet is to make sure the machinery that keeps us alive is the same machinery that keeps our AI overlords alive.

Yeah, we basically need to be the AI's gut bacteria.
 

Diana Hignutt

Very Tired
Kind Benefactor
Super Member
Registered
Joined
Feb 13, 2005
Messages
13,314
Reaction score
7,098
Location
Albany, NY
Me too.

Basically, it's another way for people to be scared of the only possible course we have - to continue to refine and improve on our tools so that we can make the world better and maybe one day add other worlds to the list so we can't get taken out by a lucky asteroid.

Just curious, why isn't living in sustainable harmony with nature, with minimum technology, another possible course open to us? Is it because of the worry of asteroids, gamma bursts, roving black holes, supernovas, etc.? Because to me, you've got to die of something, and that applies to civilizations and species as much as individuals.
 

Don

All Living is Local
Super Member
Registered
Joined
May 28, 2008
Messages
24,567
Reaction score
4,007
Location
Agorism FTW!
Just curious, why isn't living in sustainable harmony with nature, with minimum technology, another possible course open to us? Is it because of the worry of asteroids, gamma bursts, roving black holes, supernovas, etc.? Because to me, you've got to die of something, and that applies to civilizations and species as much as individuals.
Minimum technology to support how many people? Seven billion? Today we live with the "minimum technology" to support those seven billion. Lower the tech level and a few billion will have to die. Which ones?

"We've" painted ourselves into a corner by covering up the full costs of much of the tech that keeps the world working today, creating an extremely wasteful technological system.

Tech can be efficient, but only if the full costs of choices are passed on to the consumers so they can make intelligent decisions. Not gonna happen in today's political environment, where everybody wants somebody else to pay the piper.
 

Diana Hignutt

Very Tired
Kind Benefactor
Super Member
Registered
Joined
Feb 13, 2005
Messages
13,314
Reaction score
7,098
Location
Albany, NY
Minimum technology to support how many people? Seven billion? Today we live with the "minimum technology" to support those seven billion. Lower the tech level and a few billion will have to die. Which ones?

"We've" painted ourselves into a corner by covering up the full costs of much of the tech that keeps the world working today, creating an extremely wasteful technological system.

Tech can be efficient, but only if the full costs of choices are passed on to the consumers so they can make intelligent decisions. Not gonna happen in today's political environment, where everybody wants somebody else to pay the piper.

I'm not sure that's the truth, though. Much of the burden on the ecosystem from overpopulation comes from the industrial/tech/consumption part of the equation, which only a billion or so really benefit from, not necessarily from the total head count. Further, how many resources are withheld from the population by a few? I think the whole overpopulation thing is either an exaggeration, or is based on the first world's tech taking over the whole world, or, lastly, could be taken care of within several generations voluntarily (I don't have kids; I made that choice)--and I concede that last one is sketchy, unlikely, but not out of the realm of possibility.

Your second paragraph reads to me as supporting a minimum technological society, not against it.

Once again, I'm not sure we have to assume that the future, in all its possible courses (and I hold there is more than one), is dependent upon the maintenance of today's political environment; it will, in fact, unfold in spite of it.

Perhaps, if AI evolves, it will sort all of our problems out for us...but I'd rather we do it ourselves, like responsible grownups, with the idea of leaving a better world for those who come later, not an ever-worsening cycle of same old, same old...until nothing can live here. But that's just me.
 

Don

All Living is Local
Super Member
Registered
Joined
May 28, 2008
Messages
24,567
Reaction score
4,007
Location
Agorism FTW!
I'm not sure that's the truth, though. Much of the burden on the ecosystem from overpopulation comes from the industrial/tech/consumption part of the equation, which only a billion or so really benefit from, not necessarily from the total head count. Further, how many resources are withheld from the population by a few? I think the whole overpopulation thing is either an exaggeration, or is based on the first world's tech taking over the whole world, or, lastly, could be taken care of within several generations voluntarily (I don't have kids; I made that choice)--and I concede that last one is sketchy, unlikely, but not out of the realm of possibility.

Your second paragraph reads to me as supporting a minimum technological society, not against it.

Once again, I'm not sure we have to assume that the future, in all its possible courses (and I hold there is more than one), is dependent upon the maintenance of today's political environment; it will, in fact, unfold in spite of it.

Perhaps, if AI evolves, it will sort all of our problems out for us...but I'd rather we do it ourselves, like responsible grownups, with the idea of leaving a better world for those who come later, not an ever-worsening cycle of same old, same old...until nothing can live here. But that's just me.
Long term, I think you're right. It's the short term that's going to be extremely painful for a large portion of that seven billion. Although by short term, I'm speaking decades, not years. I think an economically sane world is the only one that has any chance of supporting the population levels we see today, and more.

That means the politically-induced misallocations that keep billions poor and a relative few who are politically well-connected massively rich (in paper assets) have to shake themselves out. I suspect that realignment won't happen without a lot of pain, but I have no doubt it will happen.

The productive class could maintain a healthy planet with a population equal to or exceeding what the world supports today. OTOH, no business, country or planet can afford the massive overhead, restrictions and misallocation that the political class represents. Look at the condition of those countries with the most powerful political classes, and the conditions we see here as the political class grows in power, and it's not hard to see what that overhead does to productivity and innovation.

The ascendancy of the political class invariably leads to the benefit of the few at the expense of the many. See also: North Korea, Cuba, and despotic rulers throughout history, and contrast that with the massive explosion of innovation and productivity that was unleashed when one country, for a brief time, broke free of the boundaries dictated by a small group of the politically privileged.

Thankfully, the internet has come along and created a new place, outside the tight control of rulers, where that level of innovation is once again possible... and I don't believe the ruling class can stuff that particular genie back in its bottle.

Long term, history is the story of the political class losing its grasp on the productive class. Long term, the political class loses. It's just a matter of time; I'd guess decades at the most.

Or, as you said:
Once again, I'm not sure we have to assume that the future, in all its possible courses (and I hold there is more than one), is dependent upon the maintenance of today's political environment; it will, in fact, unfold in spite of it.

The battle today is between a resurgence of The Enlightenment and a return to the Dark Ages. I'm betting on The Enlightenment in the long run, although there may be some Dark Ages ahead.
 
Last edited:

Romantic Heretic

uncoerced
Super Member
Registered
Joined
Jan 15, 2009
Messages
2,624
Reaction score
354
Website
www.romantic-heretic.com
Seeing that after millennia of philosophy and a century of modern neuroscience we still can't effectively define self-awareness, let alone understand how the brain produces it, I'm skeptical there's much chance of self-aware AI hitting the shelves any time soon.

How can something be faked when no one knows what it is? ;)