U.S. intelligence agencies built Sentient, an ‘artificial brain,’ to collect information, but won’t disclose details


They call it Sentient, an apt name for an artificial intelligence program that has been in development by U.S. intelligence agencies since 2010 and could soon find its way into our lives in ways we can only imagine.

But can a so-called “artificial brain” be trusted with matters of life and death? Can it even be trusted to keep vital information secure?

 

Many of us already have AI in our homes. It takes the form of digital assistants such as Alexa, built into devices like the Amazon Echo. (Via YouTube)

As you might expect, few in the military or intel agencies were willing to discuss Sentient, though National Reconnaissance Office deputy director of public affairs Karen Furgerson did have this to say:

“The NRO’s and the Intelligence Community’s standard practice is to NOT disclose sensitive sources and methods, as such disclosure introduces high risk of adversary nations’ countering them. Such loss harms our nation and its allies; it decreases U.S. information advantage and national security. For those reasons, details about Sentient remain classified and what we can say about it is limited.”

The current crop of robots being produced also relies on artificial intelligence. (Via YouTube)

The lack of details might lead you to dismiss Sentient as irrelevant or even benign. But once you consider the larger implications, such a system becomes not just a weapon of warfare, but something that, in the wrong hands, could be used against any of us.

The Verge notes:

“Released documents do not explicitly say which sorts of data sources Sentient may siphon in, but it’s clear that the program is interested in all kinds of information.

“Retired CIA analyst Allen Thomson goes further. ‘As I understand it, the intended — and aspirational — answer is “everything,”’ he says. In addition to images, that could include financial data, weather information, shipping stats, information from Google searches, records of pharmaceutical purchases, and more.”

Consider BlackSky, a company whose satellites orbit Earth, gathering information from across the world. As a privately owned firm, BlackSky can share its data with anyone who pays the right price:

“BlackSky takes data from 25 satellites, more than 40,000 news sources, 100 million mobile devices, 70,000 ships and planes, eight social networks, 5,000 environmental sensors, and thousands of Internet-of-Things devices. In the future, it plans to have up to 60 of its own Earth-observing satellites. All of that information goes into different processing pipelines based on its type. From a news story, BlackSky may extract people, places, organizations, and keywords. From an image, it may map out which buildings appear damaged after an earthquake.”

Are we taking AI too far? Does it threaten our privacy? (Via NeedPix)

Another question we have to ask: Does an AI program like Sentient carry the inherent biases of the person or people who created it? If so, it might unfairly flag the word “bomb” and begin targeting people of Middle Eastern origin, even when the bomb in question was created by a white supremacist terror group. That alone should give us second thoughts.

Ideally, we would be able to trust our government and private companies to keep our personal information secret, but as we’ve repeatedly seen with social media and commerce sites, that’s simply not the case. If anything, many of them take that data and resell it for additional profit, without ever fully disclosing their intentions.

For now, Sentient remains shrouded in almost absolute secrecy. But that alone may be reason enough to worry about how it will ultimately be used.

Featured Image Via Pixabay