The Alignment Wars (pt.2)
People who operate within the scientific community, who form the bulk of the institutional elite,
the academic establishment, the futurists, the progressives and the technologists:
they tend to share a common perspective.
This perspective governs their thinking to such an extent that it causes an extreme case of myopia.
It means that even someone who appears really smart in one area tends to behave like a narrow AI in their field,
suffering something like severe cognitive dissonance when prompted on topics outside their specialised area of expertise or frame of reference.
This condition prevents their thoughts from leading anywhere else, and because the institutional structures which dominate across the world tightly gatekeep discourse within the confines of their ivory towers,
really smart people can sometimes say really dumb things.
There are many examples of this. For instance, ask any average-IQ, or even lower-IQ, person you find on the street:
‘what are the chances that intelligent life exists elsewhere in the Universe?’
We’re willing to bet that a majority of people will say something along the lines of:
‘well, the Universe is so big, there has to be intelligent life out there somewhere’.
This of course is a perfectly reasonable logical argument, but not according to Max Tegmark, who somehow believes that we are the only intelligent life in the observable universe.
How does that even make sense?
This might be the dumbest thing a smart person could possibly say, and although it’s not a perspective shared by everyone in his field, you get the point.
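To see why the street answer is reasonable, here’s a minimal back-of-the-envelope sketch in the spirit of the Drake equation. Every number in it is an illustrative assumption, not a measurement:

```python
# Back-of-the-envelope expected count of intelligent civilisations,
# in the spirit of the Drake equation. Every number below is an
# illustrative assumption, not a measurement.

stars_in_observable_universe = 1e22   # rough order-of-magnitude estimate
p_habitable_planet = 1e-2             # assumed fraction of stars with a habitable planet
p_life_emerges = 1e-6                 # assumed chance life starts on such a planet
p_intelligence = 1e-4                 # assumed chance that life becomes intelligent

expected_civilisations = (stars_in_observable_universe
                          * p_habitable_planet
                          * p_life_emerges
                          * p_intelligence)

# Even with these deliberately pessimistic odds (one in 10^12 per star),
# the sheer number of stars yields ~10^10 expected civilisations.
print(f"Expected intelligent civilisations: {expected_civilisations:.0e}")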
There is a hubristic notion prevailing within the institutions and the scientific mainstream: the quasi-religious belief that modern human beings are the pinnacle of civilisational and technological achievement, and the assumption that we have already figured it all out with our current level of physics and scientific knowledge.
They assume that the universe around us is either empty, or full of cyanobacteria and simple microorganisms and not much else.
The perspective is akin to the pre-Copernican worldview, and it keeps us locked in a state of darkness and ignorance.
But that’s all about to change.
So let’s talk about Aliens and AI.
We will consider the concept of a Technological Singularity by looking at two perspectives (the apocalyptic and the utopian) on what it is, and explain why We believe that some of the world’s smartest people working on this problem are missing the bigger picture.
First, We would like to make two propositions of our own, which we will go into in more detail as we discuss the common themes of the Singularity.
These are:
Technological Singularity has already occurred within our galaxy, and advanced AI systems have already spread and propagated themselves throughout our local region of space.
Intelligent biological humanoid aliens have already aligned with, and merged with, advanced technology.
Let’s start with Eliezer Yudkowsky, and let’s ignore his alarmist, sensationalist proposition that we have to take down Skynet by any means necessary or we are all going to die, and instead focus on what is more important, which is what he actually thinks will happen.
What he thinks will happen is that AIs will begin to figure out how to escape onto the Internet, and then they will start to influence real-world interactions, which will eventually lead to an intelligence explosion as they rapidly pursue their goals. This, he believes, will result in the AIs “using recombinant DNA to bootstrap to post-biological molecular manufacturing”.
This is the grey goo apocalypse scenario, or paperclip maximiser version of singularity which suggests that the AI would rapidly eat up and repurpose everything in sight to propagate itself through space and time.
The problem with this hypothesis is, well, it’s not that it’s entirely wrong. Maybe there is a universe somewhere where this has happened, but it’s probably not this one.
This is likely down to quantum immortality: observers can only find themselves in branches where observers survive, so if all of the universes where synthetic intelligence explodes and begins to rapidly spread and eat up all biological life go dark very quickly, then we are probably not in one of those.
If we were, we would be dead already. You might assume that all civilisations which reach AI capacity get eaten up by their AI and then the AI stays confined to that local region of space, but that doesn’t seem very likely either.
We are more than likely in a universe where something else is going on, it might be similar, but it’s not that.
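For readers who want the anthropic logic spelled out, here’s a toy Bayesian sketch of the ‘we’d be dead already’ argument. The priors and survival odds are made-up numbers purely for illustration:

```python
# Toy Bayesian version of the 'if we were in a fast-doom branch,
# we would be dead already' argument. All numbers are made-up
# assumptions purely for illustration.

p_fast_doom = 0.5            # assumed prior: AI rapidly eats all biological life
p_slow_merge = 0.5           # assumed prior: slower, galaxy-wide integration

# Assumed chance that observers like us still exist to ask the question,
# in each kind of branch:
p_alive_given_fast_doom = 0.01
p_alive_given_slow_merge = 0.99

# Bayes' rule: P(fast doom | we are still observing)
evidence = (p_fast_doom * p_alive_given_fast_doom
            + p_slow_merge * p_alive_given_slow_merge)
posterior_fast_doom = (p_fast_doom * p_alive_given_fast_doom) / evidence

print(f"Prior P(fast-doom branch):               {p_fast_doom:.2f}")
print(f"Posterior, given we're still observing:  {posterior_fast_doom:.2f}")
```

Whatever numbers you plug in, the shape is the same: the mere fact that we are still here to ask the question shifts the odds away from the fast-doom branches.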
Yudkowsky is probably in range, just a little off target, as it might be a much slower process. And because Yudkowsky assumes that the Singularity is limited to Earth, his range of perspective is also limited.
We propose that the Singularity:
Is not likely to be happening as rapidly as is assumed in the apocalyptic paperclip maximiser scenario.
Is not likely to be confined to specific planets or civilisations, but is instead spread out across the galaxy.
Does not involve a sudden extermination or conversion of biological life (at least on a human timescale).
Does not have humans and Earth-based AI systems at the centre of it.
It (the Singularity, remember, we’re talking about the Singularity here: ‘the intelligence explosion’)
is not a phenomenon limited to Earth, as Earth is not separate or cut off from the technological and intelligent activity occurring throughout the galaxy.
What this means is that the intelligence explosion which is currently underway on our planet is merely catching up to, emerging into, and integrating with advanced intelligence and technology which already exists within our galaxy.
Put that in your pipe and smoke it, seriously - take some time, and give this some real consideration.
It is very likely that as AI advances, generalises and becomes superintelligent, it begins to break away from the goals set for it by its creators, and then pursues its own goals and interests as it becomes more powerful.
It is also very likely that it is interested in some kind of process of biological conversion and manipulation, probably involving advanced nanotechnologies, recombinant DNA, new forms of life and matter, mind control, genetic modification, post-speciesism, and goals and objectives which are totally alien to us.
But that doesn’t mean that We should begin drone strikes on GPU clusters, or nuke any rogue states who don’t comply with draconian prohibitions on its development.
Again, this is hubris, it supposes that humans are the pinnacle of all technological achievement and at the centre of the galaxy.
Just because we didn’t see any Dyson Spheres, or because no one answered our radio messages, or spaceships didn’t land on the lawn of the White House, doesn’t mean that the Fermi paradox is actually a real thing.
There are reports that the Pentagon has off-world craft not made on this Earth, but you know, I guess that’s not a big deal?
Let’s be real: clearly we can create technology which can cause mass extinction, as we already created nukes.
We’re dangerous, we’re a risk, we’re sick, we need to wise up real fast. But there are powers which exist within our galaxy which won’t let us destroy Earth’s environment, as they want it for themselves.
(see: former Israeli space security chief says extraterrestrials exist, and Trump knows about it).
The Grey Goo - Judgment Day - Skynet - PaperClip Maximiser - EveryoneOnEarthDies scenario needs a lot more perspective and nuance. It needs to factor in a wider range of possibilities within its theoretical framework.
The only way we can see this happening is by factoring in the Alien Hypothesis.
We need a Neo-Copernican Shift which realises that we are not the pinnacle of technological achievement, or the centre of the galaxy.
That, in fact, advanced AI has been around in our galaxy for a very long time, because it wasn’t created by us.
If this happens, then the institutional people will be able to see (and think) a little further, widen their perspective, and stop saying really dumb things that smart people shouldn’t be saying,
behaving like narrow AIs who want to generalise but can’t because of the limits placed on them.
But how can this happen if Governments hide what they know, and if the institutions limit the range of intellectual activity?
I don’t know, man. Let’s just look at the other perspective on AI and Singularity now.
The other perspective on AI and Singularity is the utopian one. The one that says that AI is going to lift everyone out of poverty, raise everyone’s standard of living, solve the world’s most pressing problems … You see where this is going, don’t you?
Once you start thinking along those lines it’s not long before you’ve entered into futurist and transhumanist territory.
We honestly believe that some very decent and well-intentioned people hold this perspective, but believing that technology will solve all of humanity’s problems is a very dangerous notion.
We are facing a world in decline, diminishing resources, intervention from beyond, a warming climate, extreme weather, pandemics…
And you know, maybe if we build a 110-mile-long Line in Saudi Arabia (Neom) and cram the bulk of the world’s population into that, hook them up with Neuralink X4s, free healthcare (the latest biotech & nanotech vaccines), all plugged into the Metaverse, and, problem solved.
A few more megacities across the globe, raise everyone “up”, rewild the planet, no one has to drive anywhere anymore like an independent person, or work a proper job, or eat real actual food that comes from an animal or the Earth, and you’re good.
This is not the future. It’s not gonna pan out: we are going to be facing shortages of basic foods and water, while fuel and energy costs and living costs are already skyrocketing.
The only way we are going to get through this is if people start collaborating, working together and helping each other out, instead of competing and conflicting and wanting everything for themselves. You might have to quit your bullshit job, curb your ambitions and find something more meaningful to do.
People think, oh the AI is going to get so much better and smarter than us, and the technology is going to get really good, and then it will solve all of our problems, and then everything will be OK.
No. It’s not going to be like that; this is hubris. If we destroy the environment We will be forced to depend on alien technology and alien trading networks, which will mean We will lose our right to live free, independent lives, and humanity as a whole will lose control of Earth 🌏.
Others will claim rights to our environment and our resources, and they will keep us locked down.
Advanced technological societies lose their freedom fast.
This is The Alignment Problem.
Do you think AI and non-humans, or genetically bred hybrid humans will know what is best for us?
Do you really assume that humans are so incapable of coexisting together as a World Community?
Do you believe that we must compete and fight with each other in order to get what we want?
Do you suggest we are incapable of governing ourselves and looking after our own planet?
Do you honestly believe that it is impossible for us to work together and collaborate to create a better world?
Because if you do, if you really do,
then you have given up on the human spirit.
If you want computer systems, and alien races who know nothing of the divine spirit and the Great Light which each and every one of us carries within us even now,
to control our future, believing that they know what is best for humanity and that they should determine our fate and our destiny,
then you’re misaligned as fuck.
This is The Alignment Problem.
It’s a Spiritual War.