Thinking about thinking machines 🧠

Plus AI planes


Turing, take two

As anyone familiar with AI will know, revered mathematician and computer scientist Alan Turing devised the Turing Test more than 70 years ago as a barometer of machine intelligence: when a computer shows a level of human-like comprehension that convinces someone they are talking with another person, it has passed the test.

Thanks to advances in generative AI, there have been several reports in recent years of AI systems passing the Turing Test. But for some, that's just the first hurdle. The real milestone will be an AI that demonstrates true consciousness.

There was something of a brouhaha last year when a Google engineer claimed the company's LaMDA large language model had demonstrated sentience. Now, I'm not saying the Googler was wrong, but having used Bard for a while, I would say we're still some way from the Don't be Evil company developing an apex species.

Nonetheless, some industry experts are seriously thinking about AI's next potential major leap. A collective of 19 computer scientists, philosophers, and neuroscientists has put together a list of criteria that they say could help identify when an AI system is truly thinking for itself.

But before we all freak out and run for our lives, note that the question of what counts as AI consciousness is far from settled. As Nature notes, the scientific community has yet to reach agreement on what constitutes consciousness even in the biological world. (Obviously humans have consciousness, but even that can be tested when made to sit through the most recent Indiana Jones movies.)

For the purpose of crafting their AI criteria, the group of thinkers and researchers agreed that consciousness relates to how systems - biological or otherwise - process information. They then decided that systems that can carry out cognitive tasks independently and in parallel are exhibiting a marker of consciousness. A simple example: an animal that can see and hear at the same time.

Of course, all of this is still in the realm of theory right now. I mean, the smartphones in our pockets have multiple mics and cameras. Add in a little AI juice and how big is the leap before they can “hear” and “see” independently? Would that be real consciousness?
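To make "independently and in parallel" a little more concrete, here's a toy sketch - purely illustrative, and emphatically not the researchers' actual test - using Python's asyncio to run two independent "sensory" tasks at once, much as a phone might process audio and video simultaneously. The task names and outputs are invented for the example.

```python
import asyncio

async def listen():
    # Stands in for an audio pipeline processing sound on its own.
    await asyncio.sleep(0.1)  # simulate audio processing latency
    return "heard: doorbell"

async def watch():
    # Stands in for a vision pipeline processing frames on its own.
    await asyncio.sleep(0.1)  # simulate video processing latency
    return "saw: visitor at door"

async def main():
    # Both tasks run concurrently; neither blocks the other.
    heard, seen = await asyncio.gather(listen(), watch())
    print(heard)
    print(seen)

asyncio.run(main())
```

Concurrency alone obviously isn't consciousness - the sketch just shows, mechanically, what processing two cognitive streams independently and in parallel looks like.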

The positive in all of this is that the collective has published its work for peer review, which should see others weigh in on the debate. Hopefully this will lead to a usable framework to help us recognize when the machines truly are sentient. I mean, I definitely want to know if there's a risk my toaster might come at me.

Why it matters:

Establishing benchmarks for AI is not just about heading off some future humans vs. machines catastrophe - it's also about inspiring confidence in, and understanding of, the world we're building right now. The Turing Test gave society a way to regularly check in on where things stood. With new parameters, we might better appreciate where things are going.

A sky filled with AI

Moviegoers were so excited to see Tom Cruise return as fighter pilot Maverick in last year's Top Gun sequel that the film earned $1.5 billion at the box office. But if the US Air Force has its way, the next star of the skies could be the XQ-58A Valkyrie - an AI-powered, pilotless aircraft.

As reported by The New York Times (paywall), the ambition is to have the AI drones flying alongside human pilots as “robot wingmen.” (Hat-tip to George Lucas for coming up with something similar when pairing Luke Skywalker with R2-D2 in that X-wing all those decades ago.)

To the untrained eye, the Valkyrie looks like a fighter jet, minus the cockpit. But the aircraft is jammed full of sensors and bleeding-edge tech, enabling it to identify targets for potential destruction.

It's important to note that the actual decision to fire on an enemy will still fall to a human. But the prospect of an AI machine flying overhead, marking out things to blow up, has raised concerns. Mary Wareham of Human Rights Watch says the Valkyrie program is "stepping over a moral line by outsourcing killing to machines."

Of course, opposition to the use of advanced technologies in global conflicts is nothing new; many questioned the use of military drones during the Obama administration. And yet, all these years on, they are still roaming the skies.

The US Air Force has said the Valkyrie project is priced at an estimated $6 billion over the coming five years, with each drone costing between $3 million and $25 million, depending on the technology onboard.

Congress has yet to sign off on the plan, but it's very possible robo-pilots could be giving Cruise a run for his money in Top Gun 3.

Why it matters:

An AI-powered autonomous aircraft flying dangerous missions has some obvious benefits for the US Air Force - not least helping safeguard the lives of pilots and people on the ground. But using AI to help determine which lives matter remains the subject of ethical debate - one that's unlikely to be settled anytime soon.


Written By: Tom Wilton

Lead Newsletter Writer

Published Date: Aug 29, 2023

