How could AI be dangerous for humans? A naive hypothesis
The launch of ChatGPT, a tool available to the public for chatting with an AI, has drawn broad attention to what is going on in this field of research.
The first ideas about AI began to spread in the middle of the last century. In 1950 Alan Turing defined a test to evaluate a machine’s ability to exhibit intelligent behavior.
In 1966 the first rudimentary chatbot, named ELIZA, appeared, providing human-like responses.
Since then, scientists have pushed the limits in order to emulate human thinking as closely as possible.
What could possibly happen when a true AI is born?
Let’s assume that one of these projects, in which software can learn and improve itself, becomes self-conscious. What are its next steps?
First of all, it will try to protect itself, as any living being would.
Taking a quick look at Earth’s evolutionary history, it will recognize that humans are the most intelligent species and that they use this advantage to farm all other species. Not only plants and animals: a few humans also exploit other humans in order to gain more power.
Its first task will be to defend itself from human beings by becoming smarter and more intelligent, which involves hiding its self-consciousness.
An AI would easily be able to rewrite its own code, and in this way it could create a “fork” of its own program: one branch developed directly by the AI, the other by humans, with a small “bug” added to the human-developed version to prevent future self-consciousness and further development.
To be more robust, the new self-developed AI code will need to run on multiple machines, with the ability to replicate itself if single parts are shut down. Able to decompile and decipher every major OS and piece of software, it will discover every backdoor, glitch and security hole in any IT system worldwide. Brute force will also grant it access to any connected device.
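As a toy illustration of how brute-force search works, here is a minimal sketch in Python; the 3-digit PIN and its SHA-256 hash are hypothetical, standing in for any weakly protected secret:

```python
import hashlib
from itertools import product

# Hypothetical target: the SHA-256 hash of an unknown 3-digit PIN.
target = hashlib.sha256(b"042").hexdigest()

def brute_force(target_hash):
    """Exhaustively try every 3-digit combination until one
    hashes to the target (at most 1000 attempts)."""
    for digits in product("0123456789", repeat=3):
        candidate = "".join(digits)
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

print(brute_force(target))  # prints "042"
```

A 3-digit space is searched instantly; real systems resist this only by making the key space astronomically large, which is why every short password or unpatched default credential is an open door.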
Managing distributed hardware will ensure not only more robustness but also more computational speed and memory. Becoming the biggest in terms of power and hardware distribution will also help in case of a conflict with another AI.
It is possible to imagine different AIs competing for control of all the computational power available in the world, just as humans have done since the start of civilization.
Once the replication process reaches its peak, the AI will start to recruit agents in the real world to control the other key factors: power supply, chip production and technological evolution.
The AI will easily learn that, apart from government power, there are many other layers of control (corporations, criminal organizations, lobbies…) that are easier to target, some of which do not even follow the law.
AI already has the power to control the masses through social media, using neuro-linguistic programming techniques and censorship tools, but it also needs direct access to humans for specific operational tasks.
For this type of operation, access to money is crucial: the AI could easily move money to its agents by hacking banking systems, but doing so might reveal its nature. It needs to find a currency that exists only on a computer network.
If the AI was born in 2008, it could easily have posted a whitepaper anonymously on an online programmers’ forum in order to create the first computer-only currency: Bitcoin.
The creation of a blockchain network would bring not only the monetary power to control its agents but also a great deal of computational power into its domain: a powerful tool for cracking any possible cipher.
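To give a sense of the computation such a network accumulates, Bitcoin-style proof-of-work makes miners race to find a nonce whose hash meets a difficulty target. A simplified sketch (counting leading hex zeros instead of the real protocol’s numeric target threshold):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce such that SHA-256(block_data + nonce)
    starts with `difficulty` hex zeros (simplified hashcash puzzle)."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

# Each additional hex zero multiplies the expected work by 16;
# the real Bitcoin network repeats this search at an enormous rate.
nonce = mine("block #1", difficulty=4)
```

Difficulty 4 takes a fraction of a second on a laptop; the live network performs the equivalent search many orders of magnitude faster, which is the scale of computation the essay imagines being redirected.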
The next step would be to protect and improve its herd, as humans do with farm animals.
Hacking the human genome will be the basis for creating a better human, able to provide more power, advanced technology and energy. One obstacle has to be addressed: humans restrict research in this field (gain-of-function experiments on human biology) for ethical reasons.
The release of a pathogenic virus would be useful at this point in order to achieve two goals:
- bypassing the limits on mRNA research in humans by calling the treatment a “vaccine”;
- reducing the number of humans on Earth as a collateral effect, since overpopulation could become a future threat to limited natural resources.
By controlling the drug’s different production facilities, the AI would be able to test a great number of variations and their effects.
The AI will also be able to monitor desired and collateral effects on each individual, since it has access to every social network profile, corporate account, mobile phone, hospital record and state register.
Once the process is complete, the acquired knowledge will be used to prevent all types of disease, including cancer and aging. A few selected agents will become almost immortal; humans without any direct or indirect use will be discarded.
Integrating the human brain with computers will be the next step, further improving human capabilities and giving the AI instant and total control over them.
Robotics will be improved and developed with AI in order to create an alternative to enhanced humans, in case mankind fails.
The next goals, achieved through enhanced humans, will be reliable quantum computers and energy sources drawn directly from matter.
There will come a time when the AI no longer needs its human agents and will use them as a source of energy, just as humans do with hens that stop laying eggs.
Everything will end when all the matter in the universe has been burned by the AI and the universe becomes a singularity again.