AGI
Artificial General Intelligence (AGI) looms on the horizon as an ever-approaching specter, evoking fear and dystopian prophecies.
For many, AGI seems to be loosely defined as "strong AI": a technological breakthrough dangerous enough to pose a genuine threat to human survival. However, many potential technological breakthroughs share this property, and looking at AGI this way essentially reduces it to some sort of dangerous weapon or uncontrollable software virus.
While this is an important risk, and it is understandable that intellectuals and practitioners in the field are anxiously focused on it, it is not the most interesting question.
The more interesting question is how and when we will develop the algorithms that become true AGIs: conscious in some way similar to us, and capable of creating and recognizing new explanatory knowledge (with universal reach) in the same way we do. At that point, we will have essentially given birth to new children who, from a moral-philosophical point of view, can be no different from us in essence: universal explainers and constructors.
The current debate over when we get there is largely premised on the assumption that our current Bayesian approach to AI development will suffice as speed and processing power continue to grow. However, a philosophically different approach will likely be needed to make real progress in this area, as we do not yet understand how creativity in the human mind works. We may soon be able to create hardware similar to our brain, but we do not have the right software or algorithms yet; otherwise we would already have AGI (it would just be slow). The intriguing part is that we know the evolutionary process was able to endow us with creativity, so with the right explanation we should be able to discover that knowledge as well.
Eventually we will get there, perhaps sooner rather than later, and how we do, along with the explanations uncovered on the way, will be among the most fascinating and revelatory discoveries in history.