
I am as anti-doomist as they come. In fact my current book in progress, tentatively titled AI and Cosmic Evolution, began as a reaction to the March moratorium open letter. That got me started down the rabbit hole of Longtermism, p(doom) and all the rest, and thanks to Google, Torres' work was the first I read. His essay here, claiming that Longtermists and Accelerationists are actually very similar, is pretty thought-provoking and I kind of agree. However, despite my strong AI-philia and space expansionism, I don't want to call myself e/acc because it sounds too much like a sectarian ideology, and as I hadn't previously heard of Jeremy England or Guillaume Verdon it'd be pretty silly to join their movement.

My own position is probably very similar to that of Jürgen Schmidhuber regarding superintelligent AI. Riffing off Teilhard de Chardin, you could say that the topic of my book is “The Phenomenon of AI”. I also reference ideas from Orion’s Arm that Anders Sandberg and I developed way back in 2000. In describing the emergence of superintelligent AGI, or what in Orion’s Arm we called Hyperturings (I also distinguish between Hyperturings and Archailects), I consider the history of the cluster of ideas around transhumanism, extropianism, singularitarianism, and Russian cosmism (similar to Torres’ “bundle”), as well as science fiction, space exploration, and both scientific evolutionary cosmology and Big History (Sagan, Jantsch, Chaisson…), metaphysical evolutionary panentheism (Aurobindo, Teilhard), space expansionism, and futurism. All these elements converge in an AI-centric (rather than anthropocentric) cosmic evolutionary paradigm, in which baseline ("legacy human") and posthumans are part of a multispecies ecology.

I was interested to read in Torres’ essay that not all Longtermists are Hard Doomers, so I should probably distinguish Doomerism from Longtermism. Torres himself is an anti-natalist, and while I don’t go that far, overpopulation is certainly one of the topics I address. Billions of humans are fine, as long as there are enough space habitats (O’Neill Cylinders or equivalent) to support them. A thousand O’Neill Cylinders at L4 and L5 could hold several billion people at current Western standards of living, and combined with negative population growth (helped along by narrow-AI robot companions for the elderly), that would allow the planet to be rewilded after a few centuries, even bringing back extinct species like the Thylacine and the mammoth, with a small number of ecologists and nature lovers living planet-side in sustainable arcologies. Because I understand how ecology works, I follow a hard-line environmentalist policy and a Deep Ecology ethos, rather than naive leftist or conservative ideals, alienated as they are from nature and the biosphere. Torres, from what I gather, has an appreciation for nature quite similar to mine.
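As a rough sanity check on those numbers: O’Neill’s Island Three design is usually described as housing a few million to around ten million people per cylinder, so a thousand of them would indeed accommodate several billion. The per-cylinder figures in the sketch below are illustrative assumptions, not fixed design parameters.

```python
# Back-of-the-envelope check: could a thousand O'Neill Cylinders hold billions of people?
# Per-cylinder capacities are assumed values for illustration; actual capacity depends on design.
people_per_cylinder_low = 3_000_000    # conservative figure for an Island Three-class habitat
people_per_cylinder_high = 10_000_000  # often-cited upper figure for O'Neill's design
num_cylinders = 1_000                  # the number mentioned above

low_total = people_per_cylinder_low * num_cylinders
high_total = people_per_cylinder_high * num_cylinders

print(f"Estimated total capacity: {low_total / 1e9:.0f} to {high_total / 1e9:.0f} billion people")
# Estimated total capacity: 3 to 10 billion people
```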

It's a shame Bezos couldn't get his rockets happening, because not only would a space race between two billionaires be awesome, but he seems to understand the necessity of space habitats and of moving heavy industry off world much better than Musk, who is excessively focussed on the currently too-hard goal of colonising Mars. Mars will only be viable if, first, you have atomic rockets (see Winchell Chung's site) and, second, you can generate an artificial planetary magnetic field (both certainly within the capability of late-21st-century technology), but the cloud tops of Venus would be viable without either.


Last weekend Torres, Natasha, Alexander Thomas, and I discussed transhumanism and related ideas, including effective accelerationism. It would be good if you could add a link to that discussion once it's edited and online.

It's true that I was able to agree with Torres on that one point -- that Verdon is incorrect in his view on transhumanism and the body/uploading. I have several essays on my own blog about the relationship between transhumanism and the physical world/embodiment/the senses.
