Jan 19

I watched the video at https://www.youtube.com/watch?v=sdjMoykqxys. I can't say I was overly impressed with the intelligence of Émile Torres, but perhaps I'm just not smart enough to understand what he's saying. I agree with him that long-termism is silly, not for any philosophical reason but for the practical reason that we can't really know what problems will be paramount in the very long term, and even if we could, we wouldn't have the tools to solve them; it would be like demanding that the Wright brothers discover and solve the problem of airport congestion five years before they built their first airplane. But then Torres says he has no objection in principle to somebody altering their mind or their body, but nobody should do it until we are certain what the long-term consequences will be. Am I wrong, or is that a blatant contradiction? And Torres keeps complaining that too many transhumanists are western white males, but I maintain it doesn't matter who is saying something; what matters is whether what they're saying is true. I could add that most anti-transhumanists are also western white males.

I strongly agree with everything Max More said, with one exception: his skepticism of the Singularity. I think a strong case, though not a proof, can be made for the Singularity, and I will try to do so now. We know for a fact that the human genome is only 750 MB long (it contains 3 billion base pairs; there are 4 bases, so each base can be represented by 2 bits, and there are 8 bits per byte), and we know for a fact it contains a vast amount of redundancy and gibberish (for example, many thousands of repetitions of ACGACGACGACG), and we know it contains the recipe for an entire human body, not just the brain. So the technique the human mind uses to extract information from the environment must be pretty simple, VASTLY less than 750 MB. I'm not saying an AI must use the exact same algorithm that humans use; it may find an even simpler one. But it does tell us that such a simple thing must exist; 750 MB is just the upper bound, and the true number must be much, much less. So even though this AI seed algorithm would require a smaller file size than a medium-quality JPEG, it enabled Albert Einstein to go from understanding precisely nothing in 1879 to being the first man to understand General Relativity in 1915. And once a machine discovers such an algorithm, then like it or not, the world will start to change at an exponential rate.
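The arithmetic behind that 750 MB figure can be checked in a few lines. This is just a sketch of the back-of-the-envelope calculation, using the same round numbers quoted above (3 billion base pairs, 4 possible bases, decimal megabytes):

```python
# Upper bound on the information content of the human genome.
BASE_PAIRS = 3_000_000_000  # roughly 3 billion base pairs
BITS_PER_BASE = 2           # 4 bases (A, C, G, T) -> log2(4) = 2 bits each
BITS_PER_BYTE = 8

total_bits = BASE_PAIRS * BITS_PER_BASE
total_bytes = total_bits / BITS_PER_BYTE
total_mb = total_bytes / 1_000_000  # decimal megabytes

print(f"Raw genome size: {total_mb:.0f} MB")  # -> Raw genome size: 750 MB
```

Note this counts the raw encoding only; with the redundancy mentioned above, the compressed size, and hence the true information content, would be considerably smaller.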

So we can be as certain as we can be of anything that it should be possible to build a seed AI that can grow from knowing nothing to being super-intelligent, and the recipe for building such a thing must be less than 750 MB, a LOT less. For this reason I never thought a major scientific breakthrough was necessary to achieve AI, just improved engineering, though I didn't know how much improvement would be necessary. However, about a year ago a computer was able to easily pass the Turing test, so today I think I do. That's why I say a strong case can be made that the Singularity is not only likely to happen but likely to happen sometime within the next five years, and that's why I'm so terrified of the possibility that during this hyper-critical time for the human species the most powerful human being on the face of the planet will be an anti-science, anti-free-market, wannabe dictator with the emotional and mental makeup of an overly pampered nine-year-old brat who probably can't even spell AI.

John K Clark 
