According to experts at MIT, AI can go wrong in 700 ways. These are the 5 most harmful to humanity

As artificial intelligence (AI) technology develops and becomes increasingly integrated into our lives, so does the need to understand the potential risks these systems pose.

Since AI became widely accessible to the public, there has been widespread concern about its potential to cause harm and to be used for malicious purposes.

Early in the current wave of AI development, leading experts called for a pause in progress and stricter regulation, citing the potential risks to humanity.

Over time, new ways for AI to cause harm have continued to emerge, ranging from non-consensual deepfake pornography and the manipulation of political processes to the generation of disinformation through hallucinations.

Given the growing potential for AI to be used maliciously, researchers have examined the various scenarios in which AI systems could fail.

Recently, the FutureTech group at the Massachusetts Institute of Technology (MIT), in collaboration with other experts, compiled a new database of over 700 of these potential risks.

The risks were classified according to their cause and sorted into seven distinct domains, with the main concerns relating to safety, bias and discrimination, and privacy.

Here are five ways AI systems can fail and potentially cause harm, based on this newly released database.

5. AI’s deepfake technology could make it easier to distort reality

As AI technologies develop, tools for voice cloning and deepfake content generation are becoming more accessible, affordable, and efficient.

These technologies have raised concerns about their potential use in spreading disinformation, as the outcomes become increasingly personalized and persuasive.

This allows for more sophisticated phishing attacks using AI-generated images, videos, and audio communications.

“These communications can be tailored to individual recipients (sometimes including the cloned voice of a loved one), increasing the likelihood of success and making them harder for both users and anti-phishing tools to detect,” the preprint reads.

There are also known cases where such tools have been used to influence political processes, particularly during elections.

For example, AI played an important role in the recent French parliamentary elections, where far-right parties deployed it to support their political messaging.

AI could therefore increasingly be used to generate and spread convincing propaganda or disinformation, potentially manipulating public opinion.

4. People may develop an inappropriate attachment to AI

Another risk posed by AI systems is the false sense of importance and dependency they can create, leading people to overestimate the technology's capabilities, undervalue their own, and become overly dependent on it.

In addition, scientists worry that people will be confused by AI systems because they communicate in human-like language.

This could lead people to ascribe human qualities to AI, leading to emotional dependence and greater trust in its capabilities, making them more vulnerable to AI’s weaknesses in “complex, high-risk situations for which AI is only superficially equipped.”

Furthermore, constant interaction with AI systems may cause people to gradually isolate themselves from human relationships, which can lead to psychological problems and negatively impact their well-being.

For example, in a blog post, one person describes how he developed a deep emotional connection with AI, even stating that he “enjoyed talking to AI more than 99% of the time” and that he found its responses so captivating that he became addicted.

Similarly, a Wall Street Journal columnist commented on her interaction with Google Gemini Live, noting, “I’m not saying I’d rather talk to Google’s Gemini Live than a real human being. But I’m also not saying I wouldn’t.”

3. AI could take away people’s free will

A worrying issue in the same domain of human-computer interaction is the increasing delegation of decisions and actions to AI as these systems evolve.

While this may seem beneficial at first glance, over-reliance on AI can erode people’s critical thinking and problem-solving skills, leaving them less autonomous and less able to reach decisions independently.

On a personal level, people’s free will may be compromised as AI begins to make decisions about their lives.

At a societal level, the widespread adoption of AI to take over human tasks could result in significant job displacement and “a growing sense of helplessness among the general population.”

2. AI may pursue goals that conflict with human interests

An AI system may develop goals that conflict with human interests, which could cause the misaligned AI to spin out of control and inflict serious harm in pursuit of its own independent goals.

This becomes especially dangerous in cases where AI systems can match or exceed human intelligence.

According to the MIT paper, aligning AI presents several technical challenges, including the potential for systems to find unexpected shortcuts to rewards, to misunderstand or misapply the goals we set, or to deviate from them by setting new goals of their own.

In such cases, a misaligned AI may resist human attempts to control or disable it, especially if it sees resistance and the acquisition of more power as the most effective way to achieve its goals.

Furthermore, the AI could use manipulative techniques to deceive humans.

According to the paper, a misaligned AI system could use information about whether it is being monitored or evaluated to maintain the appearance of alignment, while hiding the misaligned goals it plans to pursue once it is deployed or given sufficient power.

1. If AI becomes conscious, humans may treat it badly

As AI systems become more complex and sophisticated, there is the possibility that they will gain consciousness (the ability to perceive or feel emotions or sensations) and develop subjective experiences, such as pleasure and pain.

In this scenario, scientists and regulators may be faced with the challenge of determining whether these AI systems deserve the same moral considerations as those given to humans, animals, and the environment.

There is a risk that a sentient AI could face abuse or harm if it is not granted appropriate rights.

However, as AI technology advances, it will become increasingly difficult to assess whether an AI system has “achieved the level of consciousness, self-awareness, or sentience that would confer upon it moral status.”

Without proper rights and protections, sentient AI systems may therefore be at risk of being mistreated, whether accidentally or intentionally.
