AI: Where the Only Thing to Fear is Fear Itself
In today’s world, the majority of headlines I see around Artificial Intelligence (AI) skew negative. Fueled by decades of science fiction projecting AI as a future antagonist, and by present-day attention-grabbing headlines, many view AI as an ominous cloud on the horizon of human evolution – something that will overtake us and, having done so, wipe us out.
This sentiment brings to mind the profound words of Franklin D. Roosevelt: “The only thing we have to fear is fear itself.”
Much of the work done in AI is a major and encouraging pinnacle of human ingenuity, representing our innate desire to break barriers and redefine the possible. With it, we have been able to break through computational barriers, building applications the likes of which were previously thought impossible. And we’re only getting started.
Yet, like any revolutionary advancement, the right hand of power can bring the left hand of darkness, in this case, that of uncertainty and apprehension. But is this fear rooted in the technology itself or our misunderstanding of it? I argue the latter.
Movies, novels, and popular culture have, for decades, painted AI as the unstoppable force that will eventually surpass human intelligence, casting a shadow over our dominion. These depictions are, of course, tailored for entertainment, emphasizing conflict and drama over factual representation. Yet, the impact of these narratives on public perception is undeniable.
But one thing stands clear: intelligence alone is not enough, be it artificial or biological. A living, conscious, sentient entity, capable of making value judgements – for good or evil – is a lot more than just an intelligence. Yet the column inches of fear focus entirely on the ‘I’.
Take Bostrom’s famous paper clip thought experiment. In it, an AI is tasked with making paperclips. Because it can learn how to make better paperclips, it eventually realizes that all of humanity’s resources are effectively wasted if they’re not being used to make paperclips, and so it runs away with its goal and destroys humanity in an apocalypse of paperclips.
The fallacy here, of course, is that in discussing the possible, we ignore the probable. The largest factor determining the probability of this outcome would be giving the paper clip generator access to the resources that would allow it to destroy humanity. Maybe it could evolve a level of intelligence that would allow it to theorize a way to ‘break out’ of its thought experiment confines, but does that give it the ability to do so? Moreover, when we create any kind of artificial intelligence, the model, while very sophisticated, is usually very limited. It has a one-track mind, if you will, and the experiment presupposes that it can evolve beyond that.
The narrative is overtaken by fear of the possible.
It’s essential to understand that AI, at its core, is neutral. It’s a tool, an extension of our collective knowledge. AI doesn’t have emotions, ambitions, or ulterior motives. It operates on data, algorithms, and the instructions we feed it. In many ways, AI is a mirror, reflecting both the biases of its creators and the vast potential of its applications.
Perhaps our fear of AI comes from the fact that we’re looking in that mirror and seeing what is terrifying in ourselves? Perhaps this is a great wakeup call for us as a species to recognize that which in ourselves is dangerous, and do something about it?
Finally, I also think of historic mistakes made by our species. Today, AI is that great, feared, unknown. But that wasn’t always the case, was it? Modern day racism is rooted in the same ‘othering’ of outsiders. We made many mistakes, treating (or more accurately mistreating) others outside our known civilizations as savages, less than human, something to be feared.
And we all continue to pay the price for that.
Maybe this is an opportunity not to repeat that mistake. What if, some years from now, AI becomes AL – Artificial Life? Wouldn’t it be nice to be in a position where we can understand and treat each other with mutual respect?
Back to today: the actual challenge lies not in the technology but in its application. A well-programmed application that uses AI can help doctors diagnose diseases, assist students in learning, or predict weather patterns with astonishing accuracy. Conversely, a poorly built one can reinforce societal prejudices, make unfair decisions, or invade privacy.
This dichotomy means that our real focus should be on developing and growing AI and ML responsibly. One overlooked aspect of responsible development is mitigating fear. When allowed to dominate the narrative, fear stunts critical thinking. It can halt progress and keep us from harnessing the transformative power of AI for the greater good.
Instead of getting wrapped up in a cycle of fear and misinformation, we need to channel our collective energies towards education, open dialogue, and understanding. Collaborative efforts between tech developers, ethicists, policymakers, and the public can ensure that AI’s development remains aligned with humanity’s best interests.
Embracing the future means embracing AI, but with open eyes, clear minds, and fearless hearts. We stand at the cusp of a technological renaissance, and our approach will determine the legacy of this era. As we move forward, let’s remember that the most significant achievements in history were born from hope and understanding, not from fear.
(Note: I used LLMs to help me create this post. They helped me hone my thoughts and sharpen my argument. They did not replace me as a writer, nor did they drive my opinion)