
AI's true goal may no longer be intelligence

October 28, 2022

AI has been rapidly finding industrial applications, such as the use of large language models to automate enterprise IT. Those applications may make the question of actual intelligence moot.

Tiernan Ray

The British mathematician Alan Turing wrote in 1950, "I propose to consider the question, 'Can machines think?'" His inquiry framed the discussion for decades of artificial intelligence research.

For a couple of generations of scientists contemplating AI, the question of whether "true" or "human" intelligence could be achieved was always an important part of the work. 

AI may now be at a turning point where such questions matter less and less to most people. 

Also: The best AI chatbots: ChatGPT and alternatives to try

The emergence of something called industrial AI in recent years may signal an end to such lofty preoccupations. AI has more capability today than at any time in the 66 years since the term AI was first coined by computer scientist John McCarthy. As a result, the industrialization of AI is shifting the focus from intelligence to achievement.

Those achievements are remarkable. They include AlphaFold, a system from Google's DeepMind unit that can predict protein folding, and the text-generation program GPT-3 from the startup OpenAI. Both programs hold tremendous industrial promise irrespective of whether anyone calls them intelligent. 

Also: How to use ChatGPT: Everything you need to know

Among other things, AlphaFold holds the promise of designing novel forms of proteins, a prospect that has electrified the biology community. GPT-3 is rapidly finding its place as a system that can automate business tasks, such as responding to employee or customer queries in writing without human intervention.
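
To make that kind of automation concrete, the sketch below shows roughly how a GPT-3-style completion endpoint might be wired up to draft a reply to an employee question. It is a minimal sketch assuming the legacy openai Python package (pre-1.0); the model name, prompt, and parameters are illustrative assumptions, not anything described in the reporting above.

    import os
    import openai  # assumes the legacy openai package, version < 1.0

    openai.api_key = os.environ["OPENAI_API_KEY"]

    # Hypothetical employee query to be answered automatically.
    query = "How do I reset my VPN password?"

    # Ask a GPT-3 completion model to draft a short written reply.
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed model name, for illustration only
        prompt=f"Write a short, polite reply to this employee question:\n{query}\n",
        max_tokens=150,
        temperature=0.2,  # keep the answer conservative and on-topic
    )

    print(response["choices"][0]["text"].strip())

In practice such a script would sit behind a help desk or chat interface, but even this toy version shows why the question of whether the model "understands" the query can come to feel beside the point.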

That practical success, driven by a prolific semiconductor field, led by chipmaker Nvidia, seems like it might outstrip the old preoccupation with intelligence. 

In no corner of industrial AI does anyone seem to care whether such programs are going to achieve intelligence. It is as if, in the face of practical achievements that demonstrate obvious worth, the old question, "But is it intelligent?" ceases to matter.

Also: AI critic Gary Marcus: Meta's LeCun is finally coming around to the things I said years ago

As computer scientist Hector Levesque has written, when it comes to the science of AI versus the technology, "Unfortunately, it is the technology of AI that gets all the attention."

To be sure, the question of genuine intelligence does still matter to a handful of thinkers. In the past month, this publication has interviewed two prominent scholars who are very much concerned with that question.

Yann LeCun, chief AI scientist at Facebook owner Meta, spoke at length with this publication about a paper he put out this summer as a kind of think piece on where AI needs to go. LeCun expressed concern that the dominant work of deep learning today, if it simply pursues its present course, will not achieve what he refers to as "true" intelligence, which includes things such as the ability for a computer system to plan a course of action using common sense. 

Also: This new technology could blow away GPT-4 and everything like it

LeCun expresses an engineer's concern that without true intelligence, such programs will ultimately prove brittle, meaning they could break before they ever do what we want them to do. 

"You know, I think it's entirely possible that we'll have Level 5 autonomous cars without common sense," LeCun told , referring to the efforts of Waymo and others to build ADAS (advanced driver assistance systems) for self-driving, "but you're going to have to engineer the hell out of it." 

And NYU professor emeritus Gary Marcus, a frequent critic of deep learning, told this publication this month that AI as a field is stuck when it comes to finding anything like human intelligence. 

"I don't want to quibble over whether it is or is not intelligence," Marcus told . "But the form of intelligence that we might call general intelligence or adaptive intelligence, I do care about adaptive intelligence [...] We don't have machines like that."

Meta's Yann LeCun (right) and AI critic Gary Marcus.

Increasingly, the concerns of both LeCun and Marcus seem quaint. Industrial AI professionals don't want to ask hard questions; they merely want things to run smoothly. As AI reaches more and more hands, such as data scientists and self-driving car engineers who are removed from the fundamental scientific questions of research, the question "Can machines think?" becomes less relevant. 

Even scientists who realize the shortcomings of AI are tempted to put that aside to relish the practical utility of the technology.

Also: I used ChatGPT to write the same routine in these ten obscure programming languages

A scholar younger than either Marcus or LeCun, but mindful of the dichotomy of the practical and the profound, is Demis Hassabis, co-founder of DeepMind. 

At a talk in 2019 at the Institute for Advanced Study in Princeton, New Jersey, Hassabis noted the limits of many AI programs that could only do one thing well, like an idiot savant. DeepMind, said Hassabis, is trying to develop a broader, richer capability. "We are trying to find a meta-solution to solve other problems," he said.

And yet, Hassabis is just as enamored of the particular tasks at which the latest DeepMind invention excels.

Also: How to use ChatGPT to write Excel formulas

When DeepMind recently unveiled an improved way to perform linear algebra, the mathematics at the heart of deep learning, Hassabis extolled the achievement irrespective of any claims of intelligence. 

"Turns out everything is a matrix multiplication, from computer graphics to training neural networks," Hassabis wrote on Twitter. Perhaps that's true, but it holds the prospect of dismissing the quest for intelligence in favor of simply refining a tool, as if to say, If it works, why ask why? 

The field of AI is undergoing a shift in attitude. It used to be the case that every achievement of an AI program, no matter how good, would be received with the skeptical remark, "Well, but that doesn't mean it's intelligent." It's a pattern that the AI historian Pamela McCorduck has called "moving the goalposts."

Also: How to use ChatGPT to write code

Nowadays, things seem to run the opposite way: People are inclined to casually ascribe intelligence to anything and everything labeled AI. If a chatbot such as Google's LaMDA produces enough natural-language sentences, someone will argue it's sentient.

The British mathematician Alan Turing anticipated that "general educated opinion" would come to accept that machines have intelligence.

Turing himself anticipated this change in attitude. He predicted that ways of talking about computers and intelligence would shift in favor of accepting computer behavior as intelligent. 

"I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted," wrote Turing.

As the sincere question of intelligence fades, the empty rhetoric of intelligence is allowed to float freely in society to serve other agendas.

Also: Nvidia CEO Jensen Huang: AI language models as-a-service "potentially one of the largest software opportunities ever"

In a brilliantly confused encomium published recently in Fast Company, computer industry executive Michael Hochberg and retired Air Force general Robert Spalding make glib assertions about intelligence as a way to add organ music to their dire warning of geopolitical risk: 

The stakes could not be higher in training artificial general intelligence systems. AI is the first tool that convincingly replicates the unique capabilities of the human mind. It has the ability to create a unique, targeted user experience for every single citizen. This can potentially be the ultimate propaganda tool, a weapon of deception and persuasion the likes of which has not existed in history. 

Most scholars would agree that "artificial general intelligence," if it even makes sense as a term, is by no means close to being achieved by today's technology. The claims of Hochberg and Spalding as to what the programs can do are wildly exaggerated. 

Also: AI might enable us to talk to animals soon. Here's how

Such cavalier assertions about what AI is accomplishing obscure the nuanced remarks of individuals such as LeCun and Marcus. A rhetorical regime is forming that is concerned with persuasion, not with intelligence. 

That may be the direction of things for the foreseeable future. As AI increasingly gets stuff done in biology, physics, business, logistics, marketing, and warfare, and as society becomes comfortable with it, there may be fewer and fewer people who even care to ask, But is it intelligent? 


