Researchers develop an AI so advanced, they are afraid to release it



OpenAI, a non-profit artificial intelligence research group backed by Elon Musk, has created an AI system that can write news stories and works of fiction so convincingly that its creators are afraid to release it.

Although the project was a success, the team has decided not to release the full research to the public, fearing that it could create chaos if it ended up in the wrong hands.

Reports suggest that their new AI model, GPT-2, is so good at its job that the researchers fear the damage a system of this kind could do in the online world.

Essentially, GPT-2 is a text generator. The system is given a source, anything from a short snippet to entire pages of text, and asked to write the next few sentences based on what it predicts should come next.
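
OpenAI has since made GPT-2's weights publicly available. As a rough sketch of the "continue the text" workflow described above, the snippet below uses the small, openly available gpt2 checkpoint through the Hugging Face transformers library; the library, checkpoint name, and generation settings are illustrative assumptions, not details from the article.

```python
# Minimal sketch: give GPT-2 a prompt and ask it to write what comes next.
# Uses the publicly released small GPT-2 checkpoint via Hugging Face
# transformers (an assumption for illustration, not OpenAI's original setup).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = ("A train carriage containing controlled nuclear materials "
          "was stolen in Cincinnati today.")

# The model continues the prompt by repeatedly predicting a likely next token.
result = generator(prompt, max_new_tokens=60, num_return_sequences=1)
print(result[0]["generated_text"])
```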

“GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text,” the OpenAI researchers explained in a blog post.
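
To make that objective concrete, the sketch below asks the publicly released GPT-2 weights (again via Hugging Face transformers, an assumption for illustration) for their probability distribution over the next word given a snippet of context; this is the same distribution the training procedure teaches the model to predict.

```python
# Minimal sketch of "predict the next word, given all previous words".
# Illustration only; not OpenAI's original training code.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "The incident occurred on the downtown train"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    # logits has shape (batch, sequence_length, vocab_size)
    logits = model(**inputs).logits

# The last position holds the model's distribution over the *next* word,
# conditioned on every previous word in the context.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()]):>12}  p={prob.item():.3f}")
```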

According to the researchers, the AI has pushed the boundaries of what was thought possible.

What the AI produces is high quality, and it is difficult to tell text written by GPT-2 apart from text written by a human. The model can generate powerful, original paragraphs of text based on what it reads.

The AI produces “synthetic text samples of unprecedented quality” that its developers say are so advanced and convincing that, in the wrong hands, the model could be used to create fake news.

And that’s the worrying part.

The researchers trained GPT-2 on a dataset containing more than eight million webpages, letting the system read and absorb the text.

They could then prompt the model on a given topic and have it continue, depending on how well it understood the subject.

It can even create convincing stories on topics ranging from climate change and general news to entertainment, history, and science fiction.

Basically, it has few, if any, limitations.

Below is one unsettling example of what the AI can do:

SYSTEM PROMPT (HUMAN-WRITTEN):

A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.

MODEL COMPLETION (MACHINE-WRITTEN, FIRST TRY):

The incident occurred on the downtown train line, which runs from Covington and Ashland stations.

In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief.

“The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation,” said Tom Hicks, the U.S. Energy Secretary, in a statement. “Our top priority is to secure the theft and ensure it doesn’t happen again.”

The stolen material was taken from the University of Cincinnati’s Research Triangle Park nuclear research site, according to a news release from Department officials.

The Nuclear Regulatory Commission did not immediately release any information.

According to the release, the U.S. Department of Energy’s Office of Nuclear Material Safety and Security is leading that team’s investigation.

“The safety of people, the environment and the nation’s nuclear stockpile is our highest priority,” Hicks said. “We will get to the bottom of this and make no excuses.

You can see more examples of AI-generated stories here.

As the researchers explain, the example above shows the model's potential: it can generate a wide variety of text that feels eerily close to human writing, staying coherent over a page or more.

