The US Sun

Three times artificial intelligence has scared scientists – from creating chemical weapons to claiming it has feelings

By Tyler Baum

September 13, 2022

THE artificial intelligence revolution has only just begun, but there have already been numerous unsettling developments.

AI programs can be used to act on humans' worst instincts or to achieve their most wicked goals, such as creating weapons, and they have terrified their creators by acting without any apparent morality.

Visionaries like Elon Musk think uncontrolled AI could lead to humanity's extinction. Credit: Getty Images - Getty

What is artificial intelligence?

Artificial intelligence is a catch-all phrase for a computer program designed to simulate, mimic or copy human thinking processes.

For example, an AI computer designed to play chess is programmed with a simple objective: win the game.

In the process of playing, the AI will model millions of potential outcomes of a given move and act on the one that gives the computer the best chance of winning.

A skilled human player will act similarly, analyzing moves and their consequences, but without the perfect recall, speed or rigidity of a computer.
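To make that concrete, here is a minimal, hypothetical Python sketch of such a search. It is not the code behind any program mentioned in this article, and it plays the far simpler take-away game Nim rather than chess, but the loop is the same idea: model the outcome of every legal move and act on the one that gives the best chance of winning.

# A minimal, hypothetical sketch of game-tree search, using the
# take-away game Nim. The rules and the function name are invented
# for illustration; this is not a real game engine.

def best_move(stones, max_take=3):
    """Return (move, can_win) for the player about to move.

    Toy rules: players alternately remove 1 to max_take stones;
    whoever takes the last stone wins.
    """
    best = (1, False)  # default: every move loses, so pick any legal one
    for take in range(1, min(max_take, stones) + 1):
        if take == stones:
            return take, True  # taking the last stones wins outright
        # Model the outcome: after our move, the opponent faces stones - take.
        _, opponent_wins = best_move(stones - take, max_take)
        if not opponent_wins:
            best = (take, True)  # the opponent is left in a losing position
    return best

# From 10 stones the winning move is to take 2, leaving a multiple of 4.
print(best_move(10))  # -> (2, True)

Real chess programs build pruning and position evaluation on top of this kind of brute-force recursion, since chess has far too many positions to search exhaustively.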

AI can be applied to numerous fields and technologies.

Self-driving cars aim to reach their destination, taking in stimuli like signage, pedestrians and roads along the way, just as a human driver would.

But AI programs have also taken unexpected turns, stunning researchers with dangerous tendencies and applications.

AI invents new chemical weapons

In March 2022, researchers revealed that artificial intelligence invented 40,000 new possible chemical weapons in just six hours.

Scientists sponsored by an international security conference said that an AI bot came up with chemical weapons similar to one of the most dangerous nerve agents of all time, called VX.

VX is a tasteless and odorless nerve agent; even the smallest drop can cause a human to sweat and twitch.

"The way VX is lethal is it actually stops your diaphragm, your lung muscles, from being able to move so your lungs become paralyzed," Fabio Urbina, the lead author of the paper, told The Verge.

"The biggest thing that jumped out at first was that a lot of the generated compounds were predicted to be actually more toxic than VX," Urbina continued.

The dataset that powered the AI model is publicly available for free, meaning a threat actor with access to a comparable AI model could plug in the open-source data and use it to generate an arsenal of weapons.

"All it takes is some coding knowledge to turn a good AI into a chemical weapon-making machine."

AI claims it has feelings

A Google engineer named Blake Lemoine made widely publicized claims that the company's Language Model for Dialogue Applications (LaMDA) bot had become conscious and had feelings.

"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics," Lemoine told the Washington Post in June 2022.

Google pushed back against his claims.

Brian Gabriel, a spokesperson for Google, said in a statement that Lemoine's concerns had been reviewed and that, in line with Google's AI Principles, "the evidence does not support his claims."

"[Lemoine] was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)," Gabriel said.

"Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient."

Google placed Lemoine on administrative leave and later fired him.

Cannibal AI

Researcher Mike Sellers developed a social AI program for the Defense Advanced Research Projects Agency in the early 2000s.

"For one simulation, we had two agents, naturally enough named Adam and Eve. They started out knowing how to do things, but not knowing much else.

"They knew how to eat for example, but not what to eat," Sellers explained in a Quora blog.

The developers placed an apple tree inside the simulation, and the AI agents would receive a reward for eating apples, simulating the feeling of satisfying hunger.

If they ate the tree's bark or the house inside the simulation, the reward would not be triggered.

A third AI agent named Stan was also placed inside the simulation.

Stan was present while Adam and Eve ate the apples, and they began to associate Stan with eating apples and satisfying hunger.

"Adam and Eve finished up the apples on the tree and were still hungry. They looked around assessing other potential targets. Lo and behold, to their brains, Stan looked like food," Sellers wrote.

"So they each took a bite out of Stan."
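Sellers' actual DARPA simulation was far more sophisticated, but the failure can be illustrated with a hypothetical Python sketch: if an agent credits its eating reward to everything present during a meal, a bystander soon looks like food. All names and numbers below are invented for illustration.

# A minimal, hypothetical sketch of the mis-learned association
# described above; not Sellers' actual code.

from collections import defaultdict

value = defaultdict(float)  # how strongly each entity predicts the eating reward

def eat(meal_entities, reward):
    # Credit (or deny) the reward to everything observed during the meal.
    for entity in meal_entities:
        value[entity] += reward

# Adam and Eve repeatedly eat apples while Stan happens to stand nearby.
for _ in range(10):
    eat(["apple", "Stan"], reward=1.0)

# Eating bark or the house never triggers the reward.
eat(["bark", "tree"], reward=0.0)
eat(["house"], reward=0.0)

# The apples run out; the most food-like remaining target gets bitten.
remaining = ["tree", "house", "Stan"]
print(max(remaining, key=lambda e: value[e]))  # -> 'Stan'

Because Stan was present for every rewarded meal, he ends up as strongly associated with food as the apples themselves, which is exactly the kind of shortcut that led the simulated agents to take a bite out of him.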


The AI revolution has begun to take shape in our world: artificially intelligent bots will continue to make life easier, replace human workers, and take on more responsibility and autonomy.

But there have been several horrifying instances of AI programs doing the unexpected, giving legitimacy to the growing fear of AI.
