The End of Humanity? — The Fear of AI is the Beginning of Wisdom
The debate among AI experts about the technology’s potential to lead to human extinction has prompted the first episode of The Commentary. Reactions have been mixed to the Statement on AI Risk signed by prominent experts and leaders of many top AI labs, including OpenAI, DeepMind, and Anthropic.
One camp believes the statement underscores the importance of implementing appropriate safeguards to ensure that AI does not lead to human extinction. Another camp argues that sensationalizing AI risk reeks of ‘doomer’ and ‘hero scientist’ narratives.
It’s fascinating to watch the divide between AI scientists; after all, science is supposed to be data-driven and evidence-based. The mere fact that the scientists who develop smart machines are championing the call to address potential existential threats makes one wonder, “Why make AI smarter if it can threaten human existence?”
Though both sides make strong points, let’s dive deep into the topic that has been causing quite a stir in the world of artificial intelligence: the fear of AI and its potential to reach a state of singularity. You may have seen movies and read books in which superintelligent machines threaten humanity, but is this fear justified?
First things first, what is singularity? In the context of AI, singularity refers to a hypothetical point in the future where AI becomes so advanced that it…