Don't share this with your robot
On Monday, May 22, 2023, a verified Twitter account called “Bloomberg Feed” shared a tweet claiming there had been an explosion at the Pentagon, accompanied by an image. If you’re wondering what this has to do with artificial intelligence (AI), the image was AI-generated. The tweet quickly went viral and sparked a brief stock market dip. Things could have been much worse, and the incident is a stark reminder of the dangers of artificial intelligence.
Artificial Intelligence Dangers
It’s not just fake news we need to worry about. There are many immediate or potential risks associated with AI, from those concerning privacy and security to bias and copyright issues. We’ll dive into some of these artificial intelligence dangers, see what is being done to mitigate them now and in the future, and ask whether the risks of AI outweigh the benefits.
Fake News
Back when deepfakes first landed, concerns arose that they could be used with ill intent. The same could be said for the new wave of AI image generators, like DALL-E 2, Midjourney, or DreamStudio. On March 28, 2023, fake AI-generated images of Pope Francis wearing a white Balenciaga puffer jacket and enjoying several adventures, including skateboarding and playing poker, went viral. Unless you studied the images closely, it was hard to distinguish them from the real thing.
While the example with the pope was undoubtedly a bit of fun, the image (and accompanying tweet) about the Pentagon was anything but. In the wrong hands, fake AI-generated images have the power to damage reputations, end marriages or careers, create political unrest, and even start wars. In short, these images can be hugely dangerous if misused.
With AI image generators now freely available for anybody to use, and Adobe adding AI image generation to Photoshop, the opportunity to manipulate images and create fake news is greater than ever.
Privacy, Security, and Hacking
Privacy and security are also huge concerns when it comes to the risks of AI, and a number of countries have already banned OpenAI’s ChatGPT. Italy temporarily banned the model over privacy concerns, arguing that it did not comply with the European General Data Protection Regulation (GDPR), while the governments of China, North Korea, and Russia banned it over fears it would spread misinformation.
So why are we so concerned about privacy when it comes to AI? AI apps and systems gather large amounts of data in order to learn and make predictions. But how is this data stored and processed? There’s a real risk of data breaches, hacking, and information falling into the wrong hands.
It’s not just our personal data that’s at risk, either. AI hacking is a genuine risk; it hasn’t happened yet, but if those with malicious intent were able to hack into AI systems, the consequences could be serious. For example, hackers could take control of driverless vehicles, hack AI security systems to gain entry to highly secure locations, and even compromise weapons systems that rely on AI for their security.
Experts at the US Department of Defense’s Defense Advanced Research Projects Agency (DARPA) recognize these risks and are already working on the Guaranteeing AI Robustness Against Deception (GARD) program, tackling the problem from the ground up. The program’s goal is to ensure that resistance to hacking and tampering is built into algorithms and AI systems.
Copyright Infringement
Another of the dangers of AI is copyright infringement. This may not sound as serious as some other dangers we’ve mentioned, but the development of AI models like GPT-4 puts everyone at increased risk of infringement.
Every time you ask ChatGPT to create something for you, whether that’s a blog post on travel or a new name for your business, you’re feeding it information it can then use to answer future queries. The content it gives back to you could infringe somebody else’s copyright, which is why it’s so important to run anything created by AI through a plagiarism detector and edit it before publishing.
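To make that recommendation concrete, here is a minimal sketch of the kind of similarity check a plagiarism detector performs, using only Python’s standard library. The draft text, reference passages, and 0.8 threshold are purely illustrative assumptions; real detectors compare against far larger indexed corpora of published work.

```python
# Illustrative only: a naive similarity check between an AI-generated draft and a
# handful of reference passages. Real plagiarism detectors compare text against
# large indexed corpora; this sketch just shows the underlying idea.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a rough 0-1 similarity ratio between two passages."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

ai_draft = "Lisbon charms visitors with tiled facades, hillside views, and custard tarts."
reference_passages = [  # hypothetical previously published text
    "Lisbon charms visitors with tiled facades, hillside views and custard tarts.",
    "Reykjavik is a gateway to glaciers, geysers, and volcanic landscapes.",
]

for passage in reference_passages:
    score = similarity(ai_draft, passage)
    if score > 0.8:  # arbitrary threshold for this sketch
        print(f"Possible overlap (similarity {score:.2f}): {passage!r}")
```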
Societal and Data Bias
AI isn’t human, so it can’t be biased, right? Wrong. People and data are used to train AI models and chatbots, which means biased data or biased people will result in a biased AI. There are two types of bias in AI: societal bias and data bias.
With many biases present in everyday society, what happens when they become part of AI? This is societal bias: the programmers responsible for training a model can hold biased expectations, which then make their way into AI systems.
Or data used to train and develop an AI could be incorrect, biased, or collected in bad faith. This leads to data bias, which can be as dangerous as societal bias. For example, if a system for facial recognition is trained using mainly white people’s faces, it may struggle to recognize those from minority groups, perpetuating oppression.
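As a rough illustration of how this kind of data bias can be caught early, here is a minimal sketch that audits a hypothetical facial-recognition training set for group balance before any model is trained. The file names, group labels, and 25% threshold are assumptions made up for the example.

```python
# Minimal sketch: audit a (hypothetical) training set for demographic balance
# before training a facial-recognition model. An underrepresented group is a
# warning sign that the model may perform worse for it.
from collections import Counter

# Each record: (image_path, self-reported demographic group) -- illustrative data only.
training_samples = [
    ("img_001.jpg", "group_a"),
    ("img_002.jpg", "group_a"),
    ("img_003.jpg", "group_a"),
    ("img_004.jpg", "group_a"),
    ("img_005.jpg", "group_b"),
]

counts = Counter(group for _, group in training_samples)
total = sum(counts.values())

for group, n in sorted(counts.items()):
    share = n / total
    flag = "  <- underrepresented" if share < 0.25 else ""
    print(f"{group}: {n} samples ({share:.0%}){flag}")
```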
Robots Taking Our Jobs
The development of chatbots such as ChatGPT and Google Bard has opened up a whole new worry surrounding AI: the risk that robots will take our jobs. We’re already seeing writers in the tech industry being replaced by AI, software developers worried they’ll lose their jobs to bots, and companies using ChatGPT to create blog and social media content rather than hiring human writers.
According to the World Economic Forum’s The Future of Jobs Report 2020, AI is expected to displace 85 million jobs worldwide by 2025. Even if AI doesn’t replace writers, it’s already being used as a tool by many. Those in jobs at risk of being replaced by AI may need to adapt to survive. Writers, for example, may become AI prompt engineers, enabling them to work with tools like ChatGPT for content creation rather than being replaced by these models.
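For a sense of what that shift looks like in practice, here is a minimal sketch of a writer steering ChatGPT through the OpenAI Python SDK rather than drafting from scratch. It assumes the pre-1.0 openai package and an API key in the OPENAI_API_KEY environment variable; the model choice and prompt wording are only examples.

```python
# Minimal sketch: using ChatGPT as a drafting tool via the OpenAI API.
# Assumes `pip install "openai<1.0"` and an API key in OPENAI_API_KEY.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a travel writer with a friendly, concise style."},
        {"role": "user", "content": "Draft a 150-word introduction to budget travel in Lisbon "
                                    "for first-time visitors. Avoid clichés."},
    ],
    temperature=0.7,  # some creative variation, but not too loose
)

print(response.choices[0].message.content)
```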
Future Potential AI Risks
These are all immediate or looming risks, but what about some of the less likely but still possible dangers of AI we could see in the future? These include things like AI being programmed to harm humans, for example, autonomous weapons trained to kill during a war.
Then there’s the risk that AI could focus single-mindedly on its programmed goal, developing destructive behaviors as it attempts to accomplish that goal at all costs, even when humans try to stop this from happening.
Skynet taught us what happens when an AI becomes sentient. However, while Google engineer Blake Lemoine tried to convince everyone that LaMDA, Google’s artificially intelligent chatbot generator, was sentient back in June 2022, there’s thankfully no evidence to date to suggest that’s true.
The Challenges of AI Regulation
On Monday, May 15, 2023, OpenAI CEO Sam Altman attended the first congressional hearing on artificial intelligence, warning, “If this tech goes wrong, it can go quite wrong.” The OpenAI CEO made it clear he favors regulation and brought many of his own ideas to the hearing. The problem is that AI is evolving at such speed that it’s difficult to know where to start with regulation.
Congress wants to avoid making the same mistakes made at the beginning of the social media era, and Senate Majority Leader Chuck Schumer and a team of experts are already working on regulations that would require companies to reveal what data sources they used to train models and who trained them. It may be some time before exactly how AI will be regulated becomes clear, though, and no doubt there will be backlash from AI companies.
The Threat of an Artificial General Intelligence
There’s also the risk of the creation of an artificial general intelligence (AGI) that could accomplish any task a human being (or animal) could perform. Often mentioned in sci-fi films, AGI is probably still decades away, but if and when we do create one, it could pose a threat to humanity.
Many public figures have endorsed the view that AI poses an existential threat to humans, including Stephen Hawking, Bill Gates, and even former Google CEO Eric Schmidt, who stated, “Artificial intelligence could pose existential risks and governments need to know how to make sure the technology is not misused by evil people.”
So, is artificial intelligence dangerous, and do its risks outweigh its benefits? The jury’s still out on that one, but we’re already seeing evidence of some of the risks around us right now. Other dangers are less likely to come to fruition anytime soon, if at all. One thing is clear, though: the dangers of AI shouldn’t be underestimated. It’s of the utmost importance that we ensure AI is properly regulated from the outset, to minimize and hopefully mitigate any future risks.