British mathematician Alan Turing wrote in 1950: “I propose to consider the question, ‘Can machines think?’” His inquiry formed the basis of decades of debate in artificial intelligence research.
For several generations of scientists working on artificial intelligence, the question of whether “real” or “human” intelligence could be achieved was always a central part of the work.
AI may now be at a tipping point where such questions matter less and less to most people.
The emergence of industrial artificial intelligence in recent years may signal the end of such lofty concerns. Artificial intelligence today has more capabilities than at any time in the 66 years since computer scientist John McCarthy coined the term artificial intelligence. As a result, the industrialization of artificial intelligence is shifting the focus from intelligence to achievement.
These achievements are significant. They include a system that can predict protein folding, AlphaFold from Google’s DeepMind unit, and the text generator GPT-3 from startup OpenAI. Both programs have huge industrial promise, regardless of whether anyone calls them smart.
Among other things, AlphaFold has the promise of designing new protein forms, which has electrified the biology community. GPT-3 is quickly finding its place as a system that can automate business tasks, such as answering employee or customer inquiries in writing, without human intervention.
Driven by a prolific semiconductor industry led by chipmaker Nvidia, this practical success looks set to outpace the old question of intelligence.
No one in any corner of industrial AI seems to care whether such programs achieve intelligence. In the face of practical accomplishments that demonstrate obvious value, the old question, “But is it smart?”, ceases to matter.
As computer scientist Hector Levesque has written of the contest between the science of AI and the technology of AI, “Unfortunately, AI technology is getting all the attention.”
The question of genuine intelligence is certainly still important to a handful of thinkers. Over the past month, ZDNET has interviewed two prominent researchers who are deeply concerned about this question.
Yann LeCun, chief AI scientist at Facebook owner Meta Platforms, spoke at length with ZDNET about a paper he published this summer as a sort of think piece on where AI needs to go. LeCun expressed concern that the dominant work in deep learning today, if it simply continues on its current course, will fall short of what he calls “true” intelligence, which includes, for example, the ability of a computer system to plan a course of action using common sense.
LeCun expresses an engineer’s concern that without real intelligence, such programs will eventually prove brittle, meaning they may break before they ever do what we want them to do.
“You know, I think it’s entirely possible that we’ll have Level 5 autonomous cars without common sense,” LeCun told ZDNET, referring to efforts by Waymo and others to build ADAS (advanced driver assistance systems) for self-driving, “but you have to plan the hell out of it.”
And NYU professor emeritus Gary Marcus, a frequent critic of deep learning, told ZDNET this month that AI as a field is stuck on finding human-like intelligence.
“I don’t want to argue whether it’s intelligence or not,” Marcus told ZDNET. “But the form of intelligence that we might call general intelligence or adaptive intelligence, I care about adaptive intelligence […] We don’t have such machines.”
Increasingly, both LeCun’s and Marcus’s concerns risk being brushed aside. Industrial AI professionals don’t want to ask tough questions; they just want things to run smoothly. As more and more people get their hands on artificial intelligence, people such as data scientists and self-driving-car engineers who are far removed from the basic scientific questions of research, the question “Can machines think?” becomes less relevant.
Even researchers who understand AI’s shortcomings are tempted to put it aside to enjoy the technology’s practical utility.
A scholar younger than Marcus or LeCun, but aware of the dichotomy between the practical and the profound, is Demis Hassabis, co-founder of DeepMind.
In a 2019 speech at the Institute for Advanced Study in Princeton, New Jersey, Hassabis noted the limitations of many AI programs that could only do one thing well, like an idiot savant. DeepMind, Hassabis said, is trying to develop a broader, richer capability. “We’re trying to find a meta-solution to solve other problems,” he said.
And yet, Hassabis is just as enamored with the special tasks where the latest DeepMind invention excels.
When DeepMind recently revealed an improved way to perform linear algebra, the math at the heart of deep learning, Hassabis hailed the achievement regardless of intelligence claims.
“Turns out everything is matrix multiplication, from computer graphics to neural network training,” Hassabis wrote on Twitter. Perhaps that is true, but it also reflects a willingness to set aside the pursuit of intelligence and simply refine the tool, as if to say: if it works, why ask why?
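Hassabis’s remark can be made concrete: rotating a point in computer graphics and running the forward pass of a neural-network layer are both instances of the same matrix-multiply operation. A minimal sketch in plain Python, where the rotation angle, input values, and weight matrix are invented for illustration:

```python
import math

def matmul(A, B):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Computer graphics: rotating a point 90 degrees is a matrix multiplication.
theta = math.pi / 2
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]
point = [[1.0], [0.0]]            # column vector (1, 0)
rotated = matmul(R, point)        # approximately the column vector (0, 1)

# Neural networks: a dense layer's forward pass is also a matrix multiplication
# (here without bias or activation, for brevity).
W = [[0.5, -0.2],
     [0.1,  0.3]]                 # hypothetical 2x2 weight matrix
x = [[2.0], [1.0]]                # input column vector
layer_out = matmul(W, x)          # approximately the column vector (0.8, 0.5)
```

Because both workloads funnel through the same operation, even a small speedup in matrix multiplication, of the kind DeepMind reported, pays off across graphics and deep learning alike.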
A change of attitude is underway in the field of artificial intelligence. It used to be that every achievement of an AI program, no matter how good, was met with the skeptical remark: “Well, but that doesn’t mean it’s smart.” It’s a pattern that AI historian Pamela McCorduck has called “moving the goalposts.”
Today, things seem to be going the other way around: People tend to exaggerate the intelligence of anything and everything labeled as artificial intelligence. If a chatbot like Google’s LaMDA produces enough natural-language sentences, someone claims it is sentient.
Turing himself anticipated this change of attitude. He predicted that the way people talk about computers and intelligence would shift toward accepting computer behavior as intelligent.
“I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted,” wrote Turing.
As the sincere question of intelligence fades away, the empty rhetoric of intelligence is allowed to float freely in society to serve other agendas.
In a recent, brilliantly confused Fast Company op-ed, computer executive Michael Hochberg and retired Air Force general Robert Spalding make vague claims about intelligence as a way to add organ music to their warning of geopolitical risk:
The stakes couldn’t be higher in training general AI systems. Artificial intelligence is the first tool that convincingly reproduces the unique characteristics of the human mind. It has the ability to create a unique, targeted user experience for each citizen. This could potentially be the ultimate propaganda tool, a weapon of deception and persuasion unlike any other in history.
Most researchers would agree that “general AI,” if that’s even a reasonable term, is nowhere near what current technology has achieved. Hochberg and Spalding’s claims about what the programs can do are wildly exaggerated.
Such nonsensical claims about what artificial intelligence can do obscure the nuanced remarks of individuals such as LeCun and Marcus. A rhetoric takes shape that is about persuasion, not intelligence.
That may be the direction of things in the near future. If artificial intelligence does more and more in biology, physics, business, logistics, marketing, and warfare, and as society gets used to it, there may be fewer and fewer people who even want to ask, “But is it smart?”