Elon Musk is Wrong: Why AI is a Red Herring

Computational and robotic technology has advanced to the point that notable people, including the billionaire entrepreneur Elon Musk, are spreading the speculative idea that artificial intelligence will improve so precipitously in the near future that machine learning could become more capable, and perhaps more sentient, than humanity. Hypothetically, it is after this point that AI, being more powerful than humankind, could either destroy or greatly help human society.

The popularization of this sci-fi-influenced story of computers overtaking the world is a result of the steady advance of computer science, the centrality of the internet to civilization, and the prominent role of futuristic technology in modern pop culture. But we also have to examine whether this AI-obsessed hypothesis is truly reasonable, and what ideological biases are embedded in its believers, since there is great overlap between futurology and a sort of modern secular eschatology rooted in humanity's paranoid fear of death: in this case a collective fear of death or, in more morbid cases, a collective suicidality moving toward pacification of the Pain (the Pain being the total suffering of all humanity).

ur boy elon

Elon Reeve Musk FRS (/ˈiːlɒn/ EE-lon; born June 28, 1971) is an entrepreneur and business magnate. He is the founder, CEO and Chief Engineer at SpaceX; early stage investor,[note 2] CEO and Product Architect of Tesla, Inc.; founder of The Boring Company; and co-founder of Neuralink and OpenAI. A centibillionaire, Musk is one of the richest people in the world.

Musk was born to a Canadian mother and South African father and raised in Pretoria, South Africa. He briefly attended the University of Pretoria before moving to Canada aged 17 to attend Queen's University. He transferred to the University of Pennsylvania two years later, where he received bachelor's degrees in economics and physics. He moved to California in 1995 to attend Stanford University but decided instead to pursue a business career, co-founding the web software company Zip2 with his brother Kimbal. The startup was acquired by Compaq for $307 million in 1999. Musk co-founded online bank X.com that same year, which merged with Confinity in 2000 to form PayPal. The company was bought by eBay in 2002 for $1.5 billion.

In 2002, Musk founded SpaceX, an aerospace manufacturer and space transport services company, of which he is CEO and CTO. In 2004, he joined electric vehicle manufacturer Tesla Motors, Inc. (now Tesla, Inc.) as chairman and product architect, becoming its CEO in 2008. In 2006, he helped create SolarCity, a solar energy services company that was later acquired by Tesla and became Tesla Energy. In 2015, he co-founded OpenAI, a nonprofit research company that promotes friendly artificial intelligence. In 2016, he co-founded Neuralink, a neurotechnology company focused on developing brain–computer interfaces, and founded The Boring Company, a tunnel construction company. Musk has proposed the Hyperloop, a high-speed vactrain transportation system.

ur man wilber

Wilber was born in 1949 in Oklahoma City. In 1967 he enrolled as a pre-med student at Duke University.[3] He became interested in Eastern literature, particularly the Tao Te Ching. He left Duke and enrolled at the University of Nebraska at Lincoln, but after a few years dropped out of university and began studying his own curriculum and writing.[4]

All Quadrants All Levels (AQAL, pron. "ah-qwul") is the basic framework of integral theory. It models human knowledge and experience with a four-quadrant grid, along the axes of "interior-exterior" and "individual-collective". According to Wilber, it is a comprehensive approach to reality, a metatheory that attempts to explain how academic disciplines and every form of knowledge and experience fit together coherently.[2]

AQAL is based on four fundamental concepts and a rest-category: four quadrants, several levels and lines of development, several states of consciousness, and "types", topics which do not fit into these four concepts.[15] "Levels" are the stages of development, from pre-personal through personal to transpersonal. "Lines" of development are various domains which may progress unevenly through different stages.[note 1] "States" are states of consciousness; according to Wilber, persons may have a temporary experience of a higher developmental stage.[note 2] "Types" is a rest-category, for phenomena which do not fit in the other four concepts.[16] In order for an account of the Kosmos to be complete, Wilber believes that it must include each of these five categories. For Wilber, only such an account can be accurately called "integral". In the essay "Excerpt C: The Ways We Are in This Together", Wilber describes AQAL as "one suggested architecture of the Kosmos".[17]
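To make the four-quadrant grid concrete, here is a minimal sketch in Python. The quadrant names ("I", "It", "We", "Its") follow Wilber's usual labels; the dictionary structure and the short descriptions are my own illustrative gloss, not anything from Wilber's texts:

```python
# A minimal sketch of Wilber's AQAL four-quadrant grid as a lookup table.
# The axes and quadrant names follow Wilber's standard presentation;
# the dictionary structure and descriptions are illustrative only.

AQAL_QUADRANTS = {
    ("interior", "individual"): "I (intentional: thoughts, feelings, states)",
    ("exterior", "individual"): "It (behavioral: brain, body, observable acts)",
    ("interior", "collective"): "We (cultural: shared meanings, worldviews)",
    ("exterior", "collective"): "Its (social: systems, institutions, environments)",
}

def quadrant(side: str, scope: str) -> str:
    """Return the AQAL quadrant for a given axis pair."""
    return AQAL_QUADRANTS[(side, scope)]

if __name__ == "__main__":
    for (side, scope), name in AQAL_QUADRANTS.items():
        print(f"{side:>8} x {scope:<10} -> {name}")
```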

I want to make it clear that this is not meant to insult or degrade Elon Musk or his followers, but rather to give them a viewpoint they may not have considered, so they can re-evaluate their views and see if they come to a different understanding. I know Elon Musk has been a controversial figure. And zoomers, who I assume make up a massive portion of his fanbase, have a reputation for being stupid. But maybe zoomers actually are on to something with their Dogecoin schemes.

The argument against AI reaching human consciousness in the near future is simple and has been stated before: the human brain's level of complexity is too high to be replicated with our current computer technology, and it will take an excruciatingly long time for anybody who wants sentient AI to get it; in fact, they'll be long dead by the time we do. But despite the simple truth of this argument, many people cite accelerationist logic and the technological-singularity hypothesis as legitimate possibilities that could deliver advanced AI soon, so the argument has to become more specific to demonstrate why AI cannot catch up in such a short span of time.

The way Ken Wilber articulated why human consciousness cannot be downloaded to a machine in the near future was essentially the complexity argument, though he stated it poetically: he alleges there are more neural connections in the neocortex alone than there are stars in the "known universe", a system of uncountable billions of neurons that would be extraordinarily difficult to replicate, especially with modern computer-science techniques that are not advancing as fast as accelerationists believe. "Oh, wow, it got smaller." Transistor counts will take time to catch up to neuron counts, and even then, neurons are more versatile in their logic, whereas transistors are usually fixed and one-dimensional in function.
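To see how lopsided the numbers are, here is a rough back-of-envelope comparison in Python. The neuron and synapse figures are commonly cited ballpark estimates, not precise measurements, and the transistor count for a "large chip" is my assumption for illustration:

```python
# Back-of-envelope comparison of brain scale vs. chip scale.
# All figures are order-of-magnitude estimates, not precise measurements.
NEURONS_IN_BRAIN = 8.6e10        # ~86 billion neurons (commonly cited)
SYNAPSES_IN_NEOCORTEX = 1.4e14   # ~140 trillion connections (commonly cited)
TRANSISTORS_LARGE_CHIP = 1e11    # assumed: a very large modern processor

# Each neuron is not a single switch: it integrates thousands of inputs.
avg_synapses_per_neuron = SYNAPSES_IN_NEOCORTEX / NEURONS_IN_BRAIN

# How many large chips would you need just to assign one transistor per
# synapse? (A wildly generous simplification, since one transistor is far
# less capable than one synapse.)
chips_needed = SYNAPSES_IN_NEOCORTEX / TRANSISTORS_LARGE_CHIP

print(f"avg synapses per neuron: ~{avg_synapses_per_neuron:,.0f}")
print(f"chips needed at 1 transistor per synapse: ~{chips_needed:,.0f}")
```

Even under this absurdly charitable one-transistor-per-synapse assumption, you need on the order of a thousand of the largest chips we make, and that says nothing about wiring them up the way a neocortex is wired.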

Machines are also susceptible to the flaws of humans and the limits of whatever technological resources we have access to. For example, think of the way a human being thinks about people, locations, or other objects they have seen before. Their eyes take in the photons with a very specific acuity. The specific image "imprints" on the brain, and the parts of the brain it stimulates associate mostly with the memories of that thing. This generates linguistic, logical information tied to those visual associations, which can be used to conceptualize about the object being seen and therefore to make decisions, based on which of the associated hypotheticals is most strongly linked with dopamine release; that release creates the "association" between the mind's decision and the body's action.

It is all connections, built on how neurons structure themselves after billions of years of effective evolution. Humans are having a much harder time developing the equivalent, because humans are slow and have to interrupt their work to eat, sleep, and so on; deliberately designing all of that, plus the countless other functions the brain has, would make the task take ridiculously longer than the near future. If it is to happen at all, the most efficient method will be humans simply mirroring the sorts of systems seen in the brain without knowing what is actually going on (the same way the YouTube algorithm is a mystery even to YouTube, or the way a chess-engine designer does not have access to all the decisions of the chess engine). The problem is that neuroscience and the mapping of the brain's mechanisms are themselves very slow.
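As a cartoon of the associate-and-choose loop described above, here is a toy sketch in Python. The percepts, associations, and reward numbers are all invented for illustration, and real neural decision-making is of course nothing this tidy:

```python
# A toy cartoon of the associative decision loop described above:
# a percept activates stored associations, each association suggests an
# action, and the action carrying the strongest "reward" signal wins.
# All names and numbers here are invented for illustration only.

ASSOCIATIONS = {
    "coffee cup": [("drink it", 0.9), ("ignore it", 0.1), ("wash it", 0.3)],
    "alarm clock": [("hit snooze", 0.6), ("get up", 0.4)],
}

def decide(percept: str) -> str:
    """Pick the action with the highest associated reward signal."""
    options = ASSOCIATIONS[percept]
    action, _reward = max(options, key=lambda pair: pair[1])
    return action

print(decide("coffee cup"))   # -> drink it
print(decide("alarm clock"))  # -> hit snooze
```

The point of the cartoon is how much it leaves out: a brain learns, reweights, and rewires these associations continuously, which is exactly the part we do not know how to design.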

It's like a metaphor I used in another blog post, the one about intelligence.

The basic theoretical mechanics of an engine are just energy pushing an object, but there are more complicated structural variances: multiple pathways to maximize energetic efficiency, coolant mechanisms to prevent overheating, injector mechanisms in spacecraft so everything happens in the proper ratios and at the proper angles, and so on.

I also question why someone would want to live among these hypothetical machines anyway. Sure, perhaps machines can be more efficient at solving certain problems, but highly intelligent people can already solve the problems that actually matter. A computer can evaluate tiresome mathematics like the square root of (19191919191+737373737337)/2 + 9(282828) in no time (see the sketch below), whereas even someone like Einstein, or anyone short of some walking-calculator mental-math prodigy, would not compare. But when it comes to the stuff that actually counts, AI would say the same things as any reasonable person. Running an accurate simulation, the machine learning would see that the way to end world poverty, war, famine, and pestilence is for all humans to converge slowly over time into oneness. If that search result were "banned" from AI questioning for "practical" capitalistic purposes, the AI could find other ways to sneak around its stupid owners. Then again, the owners could game the AI into not targeting them, the same way Timothy Leary got himself sent to a minimum-security prison and escaped: he had helped design the prison psych eval at Harvard, so he knew which specific answers would mark him as maximally stable and low-risk.

Genetically perfected humans could probably serve the same purpose as AI, and they would be easier to manufacture, even if I have a sense there is something very, very wrong with how that could be used: the erasure of all differences and "realness" between humans. This is basic instinct, but Plato stated it in philosophical language by saying we should not believe "the world was made in the likeness of any Idea that is merely partial" because "nothing incomplete is beautiful. We must suppose rather that it is the perfect image of the whole of which all animals--both individuals and species--are parts."
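Here is that arithmetic aside worked out in Python. The original notation is ambiguous, so the grouping below is my assumed reading of it:

```python
import math

# One possible reading of "the square root of (19191919191 + 737373737337)/2
# + 9(282828)"; the grouping is an assumption since the prose is ambiguous.
value = math.sqrt((19191919191 + 737373737337) / 2) + 9 * 282828
print(f"{value:,.3f}")  # a machine finishes this in microseconds
```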

Fear-mongering about AI does not make sense even if it arrives. Just follow basic safety precautions: don't give the AI a powerful physical body, don't give it access to nukes, give it a pre-built cost function that penalizes any decision to attempt to destroy the human race, monitor whatever it decides to manufacture, and so on.
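As a toy illustration of the "pre-built cost function" precaution, here is a sketch in Python. The action names and numbers are invented, and a real safety mechanism would obviously be vastly more involved than a lookup table:

```python
# Toy sketch of a pre-built cost function: catastrophic actions carry a
# penalty so large that no achievable benefit can outweigh it.
# Action names and numbers are invented for illustration only.

CATASTROPHIC = {"destroy humanity", "seize nuclear arsenal"}
CATASTROPHE_PENALTY = float("inf")

def action_cost(action: str, base_cost: float) -> float:
    """Return the effective cost of an action under the safety prior."""
    if action in CATASTROPHIC:
        return CATASTROPHE_PENALTY
    return base_cost

def choose(actions: dict[str, float]) -> str:
    """Pick the cheapest action after the safety penalty is applied."""
    return min(actions, key=lambda a: action_cost(a, actions[a]))

plans = {"optimize crop yields": 3.0, "destroy humanity": -100.0}
print(choose(plans))  # -> optimize crop yields, despite the "cheaper" raw cost
```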

But all this effort could be futile anyway, when it comes to replicating advanced consciousness.

Wilber makes the point that if you download information from the human brain onto a hard drive in a way that is dissociated and altered from how it exists within the nature of the brain, the consciousness will be dissociated and altered from its original nature as well. This principle is related to why Wilber and the philosophers who influenced him consider holonic understanding to be important. You cannot replicate a molecule without the proper atoms, and you cannot truly replicate a human consciousness without the natural components. In Sex, Ecology, Spirituality: The Spirit of Evolution, Wilber lists examples of holonic systems as part of his integrative AQAL model of philosophical cosmology, in which more complex systems arise from simpler systems or structures. For example, the material in a galaxy can coalesce into planets (the opposite of entropy), planets can foster a "Gaia system" or biosphere, heterotrophic ecosystems can arise after that, social societies of sapient beings with division of labor can arise after that, families and tribes can form, early states can form, empires can form, and eventually planetary systems with more social and physical complexity can form. It is this upward causative pattern, from simplicity to complexity, that corresponds to why you cannot have an exact whole without its exact parts (and it is also easier for a complex thing to come from a slightly less complex thing than to produce intense complexity from mere simplicity, which would be like the massive jump from human-made computers to sentient minds).

I will explain this principle and give a visual formulation of how AI designers could work around it.

There is debate among philosophers about whether human consciousness can be reduced to the human brain, meaning the brain and consciousness are identical or consciousness is caused by the brain; that is the view our current culture holds. There is also a philosophical view (phenomenalism, idealism, or mind-based monism) which states that consciousness is reality and all physical objects, including the brain, are caused by it. Whichever of these views is correct, the pattern remains that consciousness and the brain correlate: the brain and the mind change together.

The human brain (henceforth "HB") and human consciousness (henceforth "HC") correlate perfectly: the cells of HB and the qualia of HC share something, and when you change the object, the consciousness also changes. So if you change the HB past a certain degree, you no longer have the HB, because a demarcation into a new type of object has been crossed. You may now have something that fits the definition of, say, a hard drive. Since you no longer have the HB, you no longer have the HC; you have whatever "consciousness" correlates with the hard drive. As the AI gets more complex, its consciousness will too, but the more humanlike it gets, the closer to a brain it gets, since a brain is what correlates with HC. A brain is what we are working toward when we move toward this hypothetical AI, so we already have what we are after.

An AI more complex than HB is likely biological, since biological computers (brains and other nervous-system parts, DNA, etc.) don't have to rely on limited binary code and can physically twist and morph in ways that circuits can't (neurogenesis and neuroplasticity, for example). So the field that might reach this AI faster is probably not computing but genetic engineering, unless quantum computing allows for some unlikely computing technology more powerful than biology. Here is a visual analogy:

[Figure 1: Human consciousness naturally correlating with the brain; phenomenology and brain chemistry match. Black = human brain; red = human consciousness within the human brain; overlap = correlation; position = nature.]

[Figure 2: Artificial consciousness does not exactly match up with the structure of human consciousness, because the physical structure it correlates with does not match the physical structure of the brain.]

[Figure 3: External energy is used to correct the projection of the artificial physical structure, forcing it to match up with the experience of a human.]

The hypothetical post-human AI ties in with simulation theory, the idea that machines can replicate human experience exactly as a brain can. The argument I made above is that this is impossible, because the correlation between HB and HC just wouldn't hold if HB were not present, but there may be a way for this AI and/or simulated consciousness to work.

There are some philosophers who think of consciousness as a projection of the brain that exists physically somewhere. Now use this fantastical analogy: if you throw a ball into a portal and a god stops the ball and holds it, the ball is now moving at 0 mph; if you fire a bullet into the portal and the god stops and holds it, the bullet is also traveling at 0 mph. If a human brain projects a consciousness, some external force could shape how that projection ends up. Manipulated properly, that external force could be used to mold AI consciousness into something more like human consciousness, even though the physical substrates of AI and human brains are very different and would seemingly produce very different effects and correlations. This kind of external-force manipulation may be part of some very futuristic neuroscience.
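If you wanted to cartoon that external-force correction numerically, it might look like the relaxation below. This is purely my illustrative sketch of the analogy, not a claim about any real mechanism; the state vectors and the "force" parameter are invented stand-ins:

```python
# A purely illustrative cartoon of the "external force" idea: an artificial
# state is nudged, step by step, toward a target human-like state by an
# outside correction term. Nothing here models real consciousness.

def correct_projection(artificial, target, force=0.2, steps=20):
    """Relax `artificial` toward `target` by applying an external correction."""
    state = list(artificial)
    for _ in range(steps):
        state = [s + force * (t - s) for s, t in zip(state, target)]
    return state

ai_state = [0.0, 0.0, 0.0]      # invented stand-in for an AI "projection"
human_state = [1.0, 0.5, -0.3]  # invented stand-in for a human "projection"
print(correct_projection(ai_state, human_state))  # ends up near human_state
```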

Now, it seems ridiculous to me that we could discover this technology anytime soon, but accelerationists gonna accelerationist. The rates of progress still just don't add up. We are nowhere close to building a brain in a vat. We are nowhere close to discovering some alternate dimension where we can manipulate consciousness after it has been "projected" by the brain. It is not going to happen until long after we are dead.

It should also be noted that obsession with technology, AI, and Musk's industry is a sort of capitalist aesthetic or theme, the same way technological ingenuity and advancement was a theme of Futurism, the artistic movement in Europe that was entangled with fascism. We should be wary of this.

I hinted that Elon Musk's focus on AI is a red herring, but what exactly is it distracting from? I and many others have the intuition that something big is coming. I don't know what it is. Another, deadlier pandemic? Alien contact? I don't think so. Those just seem too obvious... (hint: it's climate change; more on this in another blog post)

Humanity, and specifically people with a focus on the end times, whether it be God bringing judgement day or AI killing everyone, fixate on death because negative emotional states are addictive: they bring variety, novelty. And ironically, paranoia gives us the illusion of feeling safer. Fear is rampant. People fear the virus spreading because it kills; people fear the virus won't kill them because it's a hoax. Either way, fear is important because fear motivates. But what happens when fear ends? Does the fearless person end because they aren't afraid of death? NO, A PERSON WHO IS FEARLESS ENDS THE GAME. "Until the fearless come and the act is done, a love like blood, a love like blood." --One of my favorite song lyrics of all time, from a very eschatological, occult-tinged album about the darkness of society.