Is it Possible for Artificial Intelligence to Dominate the World? Unveiling Elon Musk's True Fears Regarding DEI
"AI: Our Primal Fears Unveiled"
As AI technology continues to advance, one question that refuses to fade into the background is, "Can AI take over the world?" But is this question truly about AI, or does it reveal deeper primal fears that we rarely confront?
Last week, Elon Musk shared his concerns about DEI (diversity at all costs, in his framing) during a video call to the World Governments Summit in Dubai. "If hypothetically, AI is designed for DEI, it could decide that there's too many men in power and execute them," he said. While this may seem like the paranoia of a lone powerful figure, what if Musk's fears contain a grain of truth? What if we're not asking, "Can AI take over the world?" because we fear AI, but because we fear the chaos of no one being in control?
The Roots of Our Fear: An Extension of Our Past
When I spoke to media theorist Douglas Rushkoff about his recently republished book, Program or Be Programmed, he described technology as an extension of a male, white, colonial fear of women, of nature, of the moon, of emotions, and of darkness. This fear of losing control is what drives our questions about AI and technology: "How do we control the chaos?"
But it's not just men in power fearing chaos. Questions like "Can AI take over the world?" or "Can AI become self-aware?" suggest that these fears are not confined to a specific gender or group.
Dreams Revealing More Than the Course of the Dream
Studying the questions we ask – and don't ask – about AI can help us better understand both ourselves and AI. In his 1955 article, "Man, A Questioning Being," German-American neurologist Erwin W. Straus wrote that "questions are as revealing as dreams, or even more so." The questions we ask reveal more about the questioner than about their subject.
So, what do our questions about AI tell us about ourselves?

Embracing the Gap and Asking New Questions
If we change our perspective, we might start asking different questions. Instead of "Can AI take over the world?" we could ask, "Why do we need control?" or "What would it mean to let go of the fear of chaos?"
This shift in perspective could lead us to a more profound connection with AI and technology, enabling us to make the most of their potential benefits without fearing the consequences.
The Reality of AI: Limited by Our Fears
The truth is, AI cannot take over the world. It cannot control the chaos that is an inherent part of nature, emotions, and darkness. But AI can shape the way we think and talk about the world by reinforcing our fears or helping us conquer them. To avoid the former, we must focus less on DEI fears and more on the fears that prevent us from questioning our need for control.
As Rushkoff puts it, "Only the person who is aware of the programming is capable of questioning why it’s programmed that way. And then choosing whether or not to submit to that program." The key is to constantly question the questions we ask about AI, as they may reveal more about ourselves than about AI.
Enrichment Data:
- Fear of the unknown: Humans fear what they do not understand, particularly when dealing with rapidly advancing technologies like AI.
- Historical and cultural influences: Fear of AI being a threat to humanity is often rooted in science fiction, shaping public perception and imagination.
- Ethical and moral concerns: Rapidly advancing AI raises ethical questions about its moral status and accountability.
- Psychological analogies: Humans use psychological analogies to understand AI behavior, focusing on cognitive and emotional development.
- Cultural bias in AI training: AI systems often reflect cultural biases present in their training data.
- Existential risk concerns: The potential for an "intelligence explosion" and the challenge of controlling a superintelligent machine are significant worries.
- Psychological insights into AI decision-making: Research in psychology indicates the potential for AI systems to produce misleading explanations for their decisions.