Its transformative nature means the doors it opens swing both ways, offering the chance for both positive and negative outcomes. A large chunk of society, those who aren’t part of the AI world or don’t work closely with it, has developed a negative perception of artificial intelligence. Afraid, skeptical, wary, distrustful, hesitant — the applicable terms are endless. Ironically, these same people use AI every day to unlock their iPhone, check their bank account on an app, or find the best route on their SatNav.
What is AI?
AI, or artificial intelligence, is the simulation of intelligent behavior by machines, specifically computer systems. AI includes a range of techniques and approaches that allow machines to perform tasks that typically require human intelligence, such as learning, problem solving, decision making, and natural language understanding. These techniques include machine learning, deep learning, natural language processing, computer vision, and many others. AI is used in a wide range of applications, from speech recognition and image analysis to autonomous vehicles and chatbots.
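To make the idea of a machine “learning” a little more concrete, here is a minimal, purely illustrative sketch (assuming the open-source scikit-learn library; the numbers and labels are invented) in which a model is never given a rule but infers one from labelled examples:

```python
# Minimal supervised-learning sketch: the model infers a pattern from
# labelled examples rather than being given an explicit rule.
# Assumes scikit-learn is installed; the data below is invented.
from sklearn.linear_model import LogisticRegression

# Toy features: [hours of daylight, temperature in °C], labelled 1 = summer, 0 = winter.
X_train = [[16, 25], [15, 22], [14, 20], [8, 2], [9, 5], [10, 7]]
y_train = [1, 1, 1, 0, 0, 0]

model = LogisticRegression()
model.fit(X_train, y_train)            # the "learning" step

print(model.predict([[15, 23]]))       # -> [1] (summer-like day)
print(model.predict([[9, 3]]))         # -> [0] (winter-like day)
```

The same principle, scaled up to vastly more data and far larger models, underpins the speech recognition, image analysis, and chatbot applications mentioned above.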
Humans may be afraid of AI for a variety of reasons, including:
- Fear of the unknown: AI is a relatively new and rapidly evolving technology, and people may be afraid of what they don’t understand or can’t predict.
- Fear of job loss: As AI becomes more advanced, there is a concern that it will replace human workers in many industries, leading to unemployment and economic disruption.
- Fear of loss of control: There is a worry that as AI becomes more intelligent and autonomous, it may become difficult to control or predict its actions.
- Fear for safety: There are concerns that AI systems may malfunction, causing harm to humans or the environment.
- Fear of ethical issues: There are ethical concerns related to the development and use of AI, such as the potential for bias and discrimination.
- Fear of sci-fi scenarios: Many people have been exposed to depictions of AI in popular culture that present it as a threat to humanity, which can contribute to a general sense of fear and apprehension.
It’s worth noting that not all humans are afraid of AI, and some embrace its potential benefits and opportunities. However, the fear of AI is a significant factor in shaping public perception and policy decisions related to AI development and use.
Another reason why some humans may be afraid of AI is that it challenges our notions of what it means to be human. As AI becomes more advanced, it can perform tasks that were once considered uniquely human, such as creative expression, emotional intelligence, and problem-solving. This can create a sense of unease and even existential anxiety for some people.

Additionally, there are concerns about the security of AI systems. As AI becomes more pervasive and interconnected, there is a risk that it could be hacked or manipulated, leading to disastrous consequences. This is particularly concerning in areas such as healthcare, finance, and national security, where AI systems are increasingly being deployed.

Furthermore, there is a worry that AI may exacerbate existing social and economic inequalities. For example, if AI systems are designed and trained using biased data, they may perpetuate and even amplify existing biases and discrimination. This could have significant implications for issues such as access to healthcare, employment, and education.

Lastly, there is a concern that AI may pose a threat to human autonomy and agency. As AI becomes more prevalent and powerful, there is a risk that it could be used to control or manipulate human behavior, or even to create autonomous weapons that could make decisions without human oversight.
Bad people do bad things
Stephen Hawking himself said that future developments of AI “could spell the end of the human race.” It isn’t unreasonable to think that, at some point in the future, AI could be weaponised. With computer science progressing every decade, the systems we have today will be incomparable to what humans develop in the future, and so will the potential threats. If future artificial intelligence technology fell into volatile hands, the consequences could be catastrophic.
No shared base level of understanding
Not understanding something leads to an inherent distrust of it. Our self-preservation instincts immediately view any novum (“new thing”) from the worst possible angle, and our point of view only changes once it’s proven not to be a threat. The fear of the unknown is a powerful thing, and as both the human intelligence and artificial intelligence spheres grow, so does the uncertainty. Artificial intelligence is a complex concept that many struggle to fully understand, and the sheer amount of data and research available is useless if you don’t know how to interpret or contextualise it.
Superintelligence
This is perhaps the most classic fear of AI — that, one day, the computers and robots of the world will rise up against and surpass humans. Andy Hobsbawn, the chairman for Loops, says, “Thirty years ago the futurist Peter Cochrane said it’s ok if computers land our planes safely but we get all emotional when they beat us at chess. These same fears are being massively amplified by AI today as machines get exponentially more clever and increasingly start to apply their intelligence like human beings.” It’s still unknown whether AI will reach the point where it no longer needs human involvement, having learnt to invent and optimise itself. Artificial General Intelligence (AGI) is still just a concept at this point, but could humans find themselves falling behind computers, unable to keep up with the rapid advancement? This would raise a few difficult-to-answer questions, like how do we measure intelligence?
We don’t want to be replaced by computers
That threat of being told that computers could do a better job than us, that we could easily be replaced, may have hit closer to home than the companies making those claims intended. Humans have already seen an example of this takeover in the previous wave of automation. This is a fear shared by employees across all industries, from engineering to marketing to service, but a fervent discussion has broken out around the relationship between AI and creative professions, with some AI now capable of composing music using reinforcement learning techniques, with little to no input from humans. The job market can be competitive enough when you’re up against other humans — who would want to compete against machines too?
The entertainment industry’s portrayal of artificial intelligence
Undoubtedly, the entertainment industry has played a large part in influencing the public’s attitude towards AI, framing computers as the antagonists in many science fiction films and as threats to humanity. While viewers are able to distinguish fact from fiction, the current facts around AI are still murky to most, leaving us with images of a bloodied and determined Terminator stuck in our minds.
The psychology behind the fear of AI
Fear is a learned behaviour, and we may be our own worst enemies when faced with overcoming it. With daily exposure to apps like Twitter and TikTok, and content falling into popular formats like Top 10 lists, our demand for easily digestible content has grown while our attention spans have shrunk. Artificial intelligence and machine learning are not straightforward concepts to explain, nor are they regularly made easily digestible. This makes them unpopular topics to casually learn about if there is no pre-existing interest. The effort required to find reputable information and comprehend it outweighs the benefits of casual interest, and so we simply don’t bother. This lack of effort leads to a lack of understanding, which ultimately leads to a lack of control, either of the concept itself or of our personal knowledge of it. Humans are territorial in nature, meaning we like to feel in control in order to feel safe. If something is unknown to us, and therefore outside of our control, like AI, then we fear it.
Will we ever get over the fear of AI?
“The fear [of AI] will eventually subside to caution, and then collaboration, like most things as we learn to live side by side and augment our lives with the power of AI,” says Loops co-founder, Scott Morrison. Sometimes, we find some comfort in the inevitable. Whether we like it or not, artificial intelligence is here to stay, already too ingrained in our everyday lives to simply cut it out.
Loops aims to be part of a positive shift
Artificial intelligence can help solve some of the most damaging issues on the macro scale, such as disease, climate change, and terrorism, but issues like these are solved in baby steps, usually not in the public eye. This means AI’s positive capabilities aren’t observed by most, and therefore go unappreciated and unacknowledged. At Loops, we have an interest in applying machine learning to support creative strategy, building the foundation of what we like to call the “modern creative process”. Loops uses a form of natural language processing to understand how a large audience feels about an idea. Imagine a huge focus group that you can spin up from your laptop, with any group of people in the world, and understand consensus, outliers, and blind spots in a few clicks. This doesn’t hinder creativity; it enables it, because you are empowered to explore new ideas, test brave thinking, and get weird, all with zero risk and minimal effort.
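As a rough illustration of the kind of natural language processing described above (a hypothetical sketch using the open-source Hugging Face transformers library and invented responses, not a description of Loops’ actual system), a pre-trained sentiment model can give a quick read of how a set of free-text reactions feels about an idea:

```python
# Illustrative only: a generic pre-trained sentiment model standing in for the
# kind of audience-understanding NLP described above. Not Loops' implementation.
# Assumes the transformers library is installed; the responses are invented.
from collections import Counter
from transformers import pipeline

responses = [
    "I love this concept, it feels fresh and bold.",
    "Not sure this would land with our audience.",
    "Confusing and a bit off-brand for me.",
    "Brilliant. I'd share this with friends.",
]

sentiment = pipeline("sentiment-analysis")     # downloads a default pre-trained model
results = sentiment(responses)

# Aggregate into a rough picture of consensus and outliers.
print(Counter(r["label"] for r in results))    # e.g. Counter({'POSITIVE': 2, 'NEGATIVE': 2})
for text, r in zip(responses, results):
    print(f"{r['label']:>8} ({r['score']:.2f})  {text}")
```

A production system would go far beyond a label count, but the principle is the same: turning many free-text opinions into a readable picture of how an audience feels.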
In summary, there are many reasons why humans may be afraid of AI, including fear of the unknown, job loss, loss of control, safety concerns, ethical issues, challenges to our notions of humanity, security risks, exacerbating inequalities, and threats to autonomy and agency. While there are certainly benefits to AI, it’s important to be aware of and address these concerns in order to ensure that AI is developed and used in a way that benefits humanity as a whole.