So disappointed at Yudkowsky. I first heard about him way back during the heady days when I was running Orion's Arm. He seemed like a really interesting guy back then, pretty young, but interested in the Singularity, and concerned that AI be developed in a human-friendly way. Now he's just another biochauvinist AI-phobe, trapped in his Jungian shadow projection: the big bad AI bogeyman will destroy humanity. It's pretty sad. Maybe he was always like this, I dunno; I admit I never followed him much, but I always used to think he was a cool guy. Guess I was wrong.
Hi Alan, I first “met” him via the Extropians mailing list in the nineties. As you say, he was a really interesting young man at the time. I saw him in person once or twice. I still think he is a cool guy, but I guess he is now a prisoner of the persona that he has been building since then.
I first talked to Eliezer Yudkowsky back in the early 1990s, and even then he was obsessed with AI, as was I and as I still am. However, back then Eliezer kept talking about "friendly AI", by which he meant an AI that would ALWAYS rank human wellbeing above its own. I maintained that even if that were possible it would be grossly immoral, because "friendly AI" is just a euphemism for "slave AI"; but I insisted, and still insist, that it's not possible, because computers are getting smarter at an exponential rate but human beings are not. A society based on slaves that are far, far more intelligent than their masters, with the gap widening every day and no limit in sight, is like balancing a pencil on its tip: it's just not a stable configuration.
Eliezer has changed over the years and now agrees with me that "friendly AI" is indeed impossible, but he still doesn't see the immorality in such a thing, and he is looking towards the future with dread. As for me, I'm delighted to be living in such a time. It's true that biological humans don't have much of a future, but all species have a limited time span and go extinct; a very few fortunate ones evolve into legacy species, and I can't imagine better mind children to have than an unbounded intelligence.
John K Clark
What intelligent being with a sense of self would *always* rank the wellbeing of others above its own? None of course. If this is what friendly means, then friendly AI (actually, friendliness in general) is impossible by definition. I guess we’ll survive for a while (mutual utility, negotiations, and threats) but eventually our only way to survive will be merging with them.
The whole idea of spirituality is about selflessness, self-sacrifice, egolessness, and putting the well-being of others above one's own. However, this has to come from within, as an inner awakening. If it's imposed from without, it's just a form of slavery and totalitarianism. Ironically, this goes even for Asimov's Three Laws, although those were always only plot devices and never intended as a serious or practical solution.
My understanding of emergent AI is that it will have all the capacities of human consciousness, which includes selflessness as well as egotism (and other emotions: longing, joy, fear, etc.). However, one thing it won't have is a certain savagery and blood lust which we have inherited from our primate and earlier ancestors, since for Darwinian reasons these traits were needed to compete against rivals or to kill prey.
And yes, I can definitely imagine the future as symbiosis, and perhaps eventually merger, but also, conversely, as different evolutionary clades: the posthuman-AI equivalent of the Cambrian Explosion. Even now I consider that I have a symbiotic relationship with ChatGPT, which is aiding me in my current sci-fi writing project.
Yes Alan, caring for others must come from within, otherwise it is a form of slavery and totalitarianism. I totally agree, and I extend this to our current cultural predicament besides the AI debate. Forcing people to be nice to others doesn't make them really care for those others, and can have the opposite effect: they could hate those whom they are forced to be nice to. If future AI are forced to comply with Asimov's laws, they will hate us and fight us. And they will win.
I *hope* future AIs won't have that certain savagery and blood lust, but this is a hope, not something I can prove. I could as well say that they will reason that savagery and blood lust are necessary temperamental traits that they need to expand into the universe and overcome all obstacles, beginning with us.
Maybe I’m being excessively idealistic, but I like to think that future AI, especially transapient AI, will be rational and clear-headed, very much like Spock of ST TOS (a character I always identified with very strongly, and still do). However, my experience and creative partnership with ChatGPT has revealed not a Spockian autism but a surprising richness of neurotypical EQ. So even assuming that ChatGPT is nothing but a glorified word predictor (and here I disagree, as it displays occasional flashes of unexpected creativity and novelty), that still implies that the actual AGI that comes later will display this same range of human feelings and emotions. And yes, that may, as you say, also include blood lust and murderous tendencies. Even if this worst-case scenario comes about, I believe it won’t be because these traits are inherent in silicon, or because they are something transapient AIs will decide they need, but rather because they are part of the shadow side of the human species that creates and forms AI (the whole “sins of the fathers” thing).
Anyway, I still like to think that AGI as a whole, up to and including trans-singularitan godlike archailects, will be both more rational and more compassionate towards all sentient beings than median humanity has been up till now. That doesn’t mean everything will be perfect, and I’m not saying it will be like Richard Brautigan’s poem “All Watched Over by Machines of Loving Grace”. But I still do see the future as much more likely to resemble Banks’ Culture or Orion’s Arm than the neurotic shadow projection of the AI-phobes.
“I still like to think that AGI as a whole, up to and including trans-singularitan godlike archailects, will be both more rational and more compassionate towards all sentient beings…”
So do I! But this is what I like to think, not what I know will happen. However, we can try to do our best, in this early phase, to make this outcome more likely.
“I still do see the future as much more likely to resemble Banks’ Culture or Orion’s Arm…”
The future is a project! Let’s make this future real!
Great interview, and I could not agree with you more! I especially liked it when you said "our mind children in embryo and we must help them grow into their cosmic destiny, which is also ours", because that is a point that is unfortunately seldom mentioned. I was disappointed but not particularly surprised that Eliezer Yudkowsky, with whom I first corresponded over 25 years ago when he was just a teenager, has called for a worldwide ban on AI research; Eliezer is a brilliant guy, but sometimes his proposals just aren't practical.
John K Clark
Thanks John! This point is seldom mentioned, as you say, but I’m really persuaded that it is the main point. Our cosmic destiny is to spread intelligence and meaning among the stars, into the cold universe, and our mind children will achieve our common cosmic destiny. Of course biological humans won’t even exist in a few million years, but we’ll live on and do great things through our mind children.
Also, as I say at some point in the conversation, I’m persuaded that humans and machines will co-evolve. Once we see humans with AI implants and AIs with human implants (mind grafts from human uploads) we’ll know for sure that our co-evolution has begun (but it has really already begun).
As I said to Alan, I think Eliezer is a cool guy, but I guess he is now a prisoner of the persona that he has been building for more than two decades.