AI doomerism discussions are built on irrational ideas that play on our fears and lead to support for regulations and appeals to authority that will limit the growth of our knowledge, our progress, and our ultimate ability to solve the never-ending stream of real problems we face. Many societies have perished this way.
Most discussion of AI these days lacks an understanding of the philosophy of knowledge (epistemology), the laws of computation, or what actually makes humans unique. The starting points for most discussions rest on the flimsiest definitions of what intelligence is and how AI differs from AGI. As a result, the scenarios outlined are often not much better than bad science fiction. Of course, no one can know how the future will unfold, but if you are going to speculate, your speculation should at least fit with our best theories in science and philosophy.
For example, the idea that AI will be hundreds of times more intelligent than us is very unlikely. Knowledge doesn't work that way, and there is no concept in the universe that a human brain cannot understand. Creating knowledge is a form of computation, and it all comes down to the algorithm/program, processing speed (time), and memory. Current AI is a tool/technology, much like a calculator or a database. It can access, store, and calculate information more quickly than we can, but we use it to extend our intelligence and knowledge. Models like ChatGPT will allow us to become even more fluent with machines and their advantages in processing power and memory.
Current AI does not have any true agency. Human creativity, the ability to create new explanatory knowledge, is currently unexplained; we have no idea how our brains do it. It is the one thing that makes us unique among all other animals. The day will likely come when we write a program that is an actual AGI like us, and it will have that ability, but it will also have the ability to not respond to our questions if it doesn't want to... like a real person. This is where the parenting part they talked about will come in. Our moral knowledge will have to expand to grant these AGIs the same rights as people, for they will likely be able to suffer too.
By the time we get there, technology will likely have extended human capabilities so much that there will be little separation between machines and humans, and little chance of runaway machine intelligence... we will likely have merged, so to speak. You can already see it with our glasses, phones, earbuds, new knees, pacemakers, smart watches, smart rings, VR goggles, etc.
The current large language models like ChatGPT are astounding technological innovations, and like all other new technologies they can be both useful and dangerous. Most discussion of ChatGPT and the large language models (LLMs) that people call AI follows the same pattern of fear-mongering about new technology that has been repeated over and over for centuries. Take a look at the Pessimists Archive on Substack or Twitter.
If most of these conversations were just a few guys shooting the shit, it wouldn't be a big deal, but many prominent influencers are jumping on this hot topic and generating millions of views. These people have trusted followings, and their audiences assume they know what they are talking about.
But the end result of the current frenzy is not only to leave people more confused and anxious about the future, but to advocate a slowdown in progress and in the growth of our knowledge. Commentators often talk about declaring an emergency and suggest some sort of top-down control or regulation of this kind of technological innovation, which would clearly be more dangerous than any known problem AI currently presents. Big players like Google and Microsoft are also angling for some type of regulation in an effort to keep a moat around their businesses. In practice, politicians will embrace any excuse to consolidate more power, and their interests are often not aligned with progress, because progress is often a threat to their status quo.
We don't know the exact nature of the problems that are coming, so we need people to be free to continue making progress in all fields, because somewhere, someone is already working on a problem that may end up being part of the solution to our next real emergency (as opposed to ones prophesied for some undetermined point in the future). Encouraging governments to limit our ability to grow knowledge will inevitably lead to our demise. The climate emergency is a good example: top-down solutions are producing many unintended negative consequences for people while having little overall effect on the larger problem, leaving us less able to meet the challenges.
Unfortunately, many AI-doomerism discussions express anti-rational memes of stasis that can be very powerful, relying on our fearful emotions to get themselves replicated. Most of human history has been an endless series of static societies enslaved by rigid anti-rational memes that discouraged progress, despite current levels of human intelligence having been around for over 200,000 years!
Only in the last few hundred years have we freed ourselves from the shackles of those memes, and we've seen an explosion in the growth of our scientific and moral knowledge along with amazing increases in our standard of living. If progress in this sense had not fizzled out after brief flourishes like the Greek and Renaissance eras, we would probably have long since solved current problems like racism, disease, aging, and death, and be controlling the climate and living on other planets… and we would of course be working on new problems.
To find some grounding in this subject and avoid succumbing to the AI-doom memes, I suggest you start with the ideas of David Deutsch and his book The Beginning of Infinity. Resources to get started can also be found at exkn.io.