AI does not have thoughts, no matter what you think
New York Times
Last updated: Aug 21, 2022
As the sun set over Maury Island, just south of Seattle, Ben Goertzel and his jazz-fusion band had one of those moments that all bands hope for: keyboard, guitar, saxophone, and lead singer coming together as if they were one.
Goertzel was on keys. The band’s friends and family listened from a patio overlooking the beach. And Desdemona, wearing a purple wig and a black dress laced with metal studs, was on lead vocals, warning of the coming Singularity — the inflection point where technology can no longer be controlled by its creators.
“The Singularity will not be centralized!” she bellowed. “It will radiate through the cosmos like a wasp!”
After more than 25 years as an artificial intelligence (AI) researcher, Goertzel believed he had finally reached the end goal: Desdemona, a machine he had built, was sentient.
But a few minutes later, he realized this was nonsense.
“When the band gelled, it felt like the robot was part of our collective intelligence — that it was sensing what we were feeling and doing,” he said. “Then I stopped playing and thought about what really happened.”
What happened was that Desdemona, through some sort of technology-meets-jazz-fusion kismet, hit him with a reasonable facsimile of his own words at just the right moment.
Goertzel is CEO and chief scientist of an organization called SingularityNET. He built Desdemona to, in essence, mimic the language in books he had written about the future of artificial intelligence.
Many people in Goertzel’s field are not as good at distinguishing between what is real and what they might want to be real.
The most famous recent example is an engineer named Blake Lemoine. He worked on artificial intelligence at Google, specifically on software that can generate words on its own — what’s called a large language model. He concluded the technology was sentient; his bosses concluded it was not. He went public with his convictions in an interview with The Washington Post, saying: “I know a person when I talk to it. It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code.”
The interview caused an enormous stir across the world of artificial intelligence researchers, which I have been covering for more than a decade, and among people who do not normally follow large-language-model breakthroughs. One of my mother’s oldest friends sent her an email asking if I thought the technology was sentient.
When she was assured that it was not, her reply was swift. “That’s consoling,” she said. Google eventually fired Lemoine.
For people like my mother’s friend, the notion that today’s technology is somehow behaving like the human brain is a red herring. There is no evidence this technology is sentient or conscious — two words that describe an awareness of the surrounding world.
That goes even for the simplest form of sentience you might find in a worm, said Colin Allen, a professor at the University of Pittsburgh who explores cognitive skills in both animals and machines. “The dialogue generated by large language models does not provide evidence of the kind of sentience that even very primitive animals likely possess,” he said.
Alison Gopnik, a professor of psychology who is part of the AI research group at the University of California, Berkeley, agreed. “The computational capacities of current AI like the large language models,” she said, “don’t make it any more likely that they are sentient than that rocks or other machines are.”
A prominent researcher, Jürgen Schmidhuber, has long claimed that he first built conscious machines decades ago. In February, Ilya Sutskever, one of the most important researchers of the past decade and the chief scientist at OpenAI, a research lab in San Francisco backed by $1 billion from Microsoft, said today’s technology might be “slightly conscious.” Several weeks later, Lemoine gave his big interview.
Desdemona’s ancestors
On July 7, 1958, inside a government lab several blocks west of the White House, psychologist Frank Rosenblatt unveiled a technology he called the Perceptron.
It did not do much. As Rosenblatt demonstrated for reporters visiting the lab, if he showed the machine a few hundred rectangular cards, some marked on the left and some on the right, it could learn to tell the difference between the two.
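To give a sense of what that demonstration involved, here is a minimal sketch of a perceptron-style learning rule in Python. The card encoding and marking scheme below are invented for illustration; Rosenblatt’s Perceptron was custom hardware wired to light sensors, not a modern program.

```python
import random

# Each "card" is a tiny grid of pixels flattened into a list of 0s and 1s.
# Cards "marked on the left" light up the left column; cards "marked on the
# right" light up the right column. (Invented encoding, for illustration only.)
def make_card(marked_left):
    card = [0] * 6  # a 2-row by 3-column grid, flattened row by row
    for row in range(2):
        card[row * 3 + (0 if marked_left else 2)] = 1
    return card

def predict(weights, bias, card):
    total = bias + sum(w * x for w, x in zip(weights, card))
    return 1 if total > 0 else 0  # 1 means "marked on the left"

# The perceptron learning rule: nudge the weights whenever a guess is wrong.
weights, bias = [0.0] * 6, 0.0
for _ in range(200):
    marked_left = random.random() < 0.5
    card, label = make_card(marked_left), int(marked_left)
    error = label - predict(weights, bias, card)
    if error:
        weights = [w + error * x for w, x in zip(weights, card)]
        bias += error

print(predict(weights, bias, make_card(True)))   # 1: a left-marked card
print(predict(weights, bias, make_card(False)))  # 0: a right-marked card
```

The learning rule is the whole trick: each time the machine guesses wrong, its weights are adjusted toward the example in front of it, and once they settle it sorts cards it has never seen before.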
He said the system would one day learn to recognize handwritten words, spoken commands, and even people’s faces. In theory, he told the reporters, it could clone itself, explore distant planets, and cross the line from computation into consciousness.
When he died 13 years later, it could do none of that. But this was typical of AI research — an academic field created around the same time Rosenblatt went to work on the Perceptron.
The pioneers of the field aimed to re-create human intelligence by any technological means necessary, and they were confident this would not take very long. Some said a machine would beat the world chess champion and discover new mathematical theorems within the next decade. That did not happen, either.
The research produced some notable technologies, but they were nowhere close to reproducing human intelligence. “Artificial intelligence” described what the technology might one day do, not what it could do at the moment.
Why they believe
In 2020, OpenAI unveiled a system called GPT-3. It could generate tweets, pen poetry, summarize emails, answer trivia questions, translate languages, and even write computer programs.
Sam Altman, a 37-year-old entrepreneur and investor who leads OpenAI as CEO, believes this and similar systems are intelligent. “They can complete useful cognitive tasks,” Altman told me on a recent morning. “The ability to learn — the ability to take in new context and solve something in a new way — is intelligence.”
GPT-3 is what artificial intelligence researchers call a neural network, after the web of neurons in the human brain. That, too, is aspirational language. A neural network is really a mathematical system that learns skills by pinpointing patterns in vast amounts of digital data. By analyzing thousands of cat photos, for instance, it can learn to recognize a cat.
This is the same technology that Rosenblatt explored in the 1950s. He did not have the vast amounts of digital data needed to realize this big idea. Nor did he have the computing power needed to analyze all that data. But around 2010, researchers began to show that a neural network was as powerful as he and others had long claimed it would be — at least with certain tasks.
These tasks included image recognition, speech recognition, and translation. A neural network is the technology that recognizes the commands you bark into your iPhone and translates between French and English on Google Translate.
More recently, researchers at places such as Google and OpenAI began building neural networks that learned from enormous amounts of prose, including digital books and Wikipedia articles by the thousands. GPT-3 is an example.
As it analyzed all that digital text, it built what you might call a mathematical map of human language. Using this map, it can perform many different tasks, such as penning speeches, writing computer programs, and having a conversation.
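As a loose illustration of what a “mathematical map” of language means in practice, the sketch below builds a toy next-word predictor from word-pair counts. The miniature corpus is invented, and GPT-3 relies on a neural network with billions of parameters rather than a count table; the point is only that predicting text from the statistics of earlier text is enough to generate new sentences.

```python
import random
from collections import Counter, defaultdict

# A miniature corpus standing in for "enormous amounts of prose" (invented text).
corpus = (
    "the singularity will not be centralized . "
    "the singularity will radiate through the cosmos . "
    "the band played as the sun set over the water ."
).split()

# Count which word tends to follow which: a crude statistical map of the text.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start, length=8):
    """Write new text by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# e.g. "the singularity will radiate through the cosmos . the"
```

Swap the count table for a neural network and the toy corpus for a large slice of the internet, and you have roughly the recipe behind systems like GPT-3.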
But there are endless caveats. Using GPT-3 is like rolling the dice: If you ask it for 10 speeches in the voice of Donald Trump, it might give you five that sound remarkably like the former president — and five others that come nowhere close. Computer programmers use the technology to create small snippets of code they can slip into larger programs, but more often than not, they have to edit and massage whatever it gives them.
Even after we discussed these flaws, Altman described this kind of system as intelligent. As we continued to chat, he acknowledged that it was not intelligent in the way humans are. “It is like an alien form of intelligence,” he said. “But it still counts.”
Why everyone else believes
In the mid-1960s, a researcher at the Massachusetts Institute of Technology, Joseph Weizenbaum, built an automated psychotherapist he called Eliza. This chatbot was simple. Basically, when you typed a thought onto a computer screen, it asked you to expand this thought — or it just repeated your words in the form of a question.
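For a sense of just how simple, here is a minimal Eliza-style exchange in Python. The rules and pronoun swaps are invented for illustration; Weizenbaum’s original program used a larger hand-written script, not these exact patterns.

```python
import re

# A few hand-written rules in the spirit of Eliza (illustrative, not
# Weizenbaum's original script). Each rule turns a statement back into a question.
SWAPS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    """Swap first-person words for second-person ones, e.g. 'my job' -> 'your job'."""
    return " ".join(SWAPS.get(word, word) for word in text.lower().split())

def eliza(statement):
    match = re.match(r"i feel (.*)", statement, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i am (.*)", statement, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    # Fall back to simply turning the user's own words into a question.
    return f"Can you tell me more about why you say: {reflect(statement)}?"

print(eliza("I am worried about my job"))
# -> "How long have you been worried about your job?"
print(eliza("I feel nobody listens to me"))
# -> "Why do you feel nobody listens to you?"
```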
But much to Weizenbaum’s surprise, people treated Eliza as if it were human. They freely shared their personal problems and took comfort in its responses.
“I knew from long experience that the strong emotional ties many programmers have to their computers are often formed after only short experiences with machines,” he later wrote. “What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
We humans are susceptible to these feelings. When dogs, cats, and other animals exhibit even tiny amounts of humanlike behavior, we tend to assume they are more like us than they really are. Much the same happens when we see hints of human behavior in a machine.
Scientists now call it the Eliza effect, and it is playing out all over again with modern technology.
Where the robots will take us
Margaret Mitchell worries about what all this means for the future.
As a researcher at Microsoft, then Google, where she helped found its AI ethics team, and now Hugging Face, another prominent research lab, she has seen the rise of this technology firsthand. Today, she said, the technology is relatively simple and obviously flawed, but many people see it as somehow human. What happens when the technology becomes far more powerful?
Some in the community of AI researchers worry that these systems are on their way to sentience or consciousness. But this is beside the point.
“A conscious organism — like a person or a dog or other animals — can learn something in one context and learn something else in another context and then put the two things together to do something in a novel context they have never experienced before,” said Allen, the University of Pittsburgh professor. “This technology is nowhere close to doing that.”
There are far more immediate — and more real — concerns.
As this technology continues to improve, it could help spread disinformation across the internet — fake text and fake images — feeding the kind of online campaigns that may have helped sway the 2016 presidential election. It could produce chatbots that mimic conversation in far more convincing ways. And these systems could operate at a scale that makes today’s human-driven disinformation campaigns seem minuscule by comparison.
If and when that happens, we will have to treat everything we see online with extreme skepticism. But Mitchell wonders if we are up to the challenge.
“I worry that chatbots will prey on people,” she said. “They have the power to persuade us what to believe and what to do.”