In the battle with robots, human workers are winning
Farhad Manjoo, New York Times
Last updated: Oct 09, 2022
Why do I still have a job?
It is a question readers ask me often, but I mean it more universally: Why do so many of us still have jobs?
It is 2022, and computers keep stunning us with their achievements. Artificial intelligence systems are writing, drawing, creating videos, diagnosing diseases, dreaming up new molecules for medicine and doing much more to make their parents very proud. Yet somehow we, sacks of meat, though prone to exhaustion, distraction, injury and sometimes spectacular error, remain in high demand.
How did this happen? Weren’t humans supposed to have been replaced by now — or at least severely undermined by the indefatigable go-getter robots that were said to be gunning for our jobs?
I have been thinking about this a lot recently. In part it is because I was among the worriers — I started warning about the coming robotic threat to human employment in 2011. As the decade progressed and artificial intelligence systems began to surpass even their inventors’ expectations, evidence for the danger seemed to pile up. In 2013, a study by an Oxford economist and an AI scientist estimated that 47 percent of jobs are “at risk” of being replaced by computers. In 2017, the McKinsey Global Institute estimated that automation could displace hundreds of millions of workers by 2030, and global economic leaders were discussing what to do about the “robocalypse”. In the 2020 campaign, AI’s threat to employment became a topic of presidential debates.
Even then, predictions of robot dominance were not quite panning out, but the pandemic and its aftermath ought to radically shift our thinking. Now, as central bankers around the world are rushing to cool labor markets and tame inflation — a lot of policymakers are hoping that this week’s employment report shows declining demand for new workers — a few economic and technological truths have become evident.
First, humans have been underestimated. It turns out that we (well, many of us) are really amazing at what we do, and for the foreseeable future we are likely to prove indispensable across a range of industries, especially column-writing. Computers, meanwhile, have been overestimated. Though machines can look indomitable in demonstrations, in the real world AI has turned out to be a poorer replacement for humans than its boosters have prophesied.
What is more, the entire project of pitting AI against people is beginning to look pretty silly, because the likeliest outcome is what has pretty much always happened when humans acquire new technologies — the technology augments our capabilities rather than replaces us. Is “this time different”, as many Cassandras took to warning over the past few years? It is looking like not. Decades from now I suspect we will have seen that artificial intelligence and people are like peanut butter and jelly: better together.
It was a recent paper by Michael Handel, a sociologist at the Bureau of Labor Statistics, that helped me clarify the picture. Handel has been studying the relationship between technology and jobs for decades, and he has been skeptical of the claim that technology is advancing faster than human workers can adapt to the changes. In the recent analysis, he examined long-term employment trends across more than two dozen job categories that technologists have warned were particularly vulnerable to automation. Among these were financial advisers, translators, lawyers, doctors, fast-food workers, retail workers, truck drivers, journalists and, poetically, computer programmers.
His upshot: Humans are pretty handily winning the job market. Job categories that a few years ago were said to be doomed by AI are doing just fine. Data shows “little support” for “the idea of a general acceleration of job loss or a structural break with trends predating the AI revolution”, Handel writes.
Consider radiologists, highly paid medical doctors who undergo years of specialty training to diagnose diseases through imaging procedures like X-rays and MRIs. As a matter of technology, what radiologists do looks highly susceptible to automation. Machine learning systems have made computers very good at this sort of task; if you feed a computer enough chest X-rays showing diseases, for instance, it can learn to diagnose those conditions — often faster and with accuracy rivaling or exceeding that of human doctors.
Such developments once provoked alarm in the field. In 2016, an article in The Journal of the American College of Radiology warned that machine learning “could end radiology as a thriving specialty”. The same year, Geoffrey Hinton, one of the originators of machine learning, said that “people should stop training radiologists now” because it was “completely obvious that within five years deep learning is going to be better than radiologists”.
Hinton later added that it could take 10 years, so he may still prove correct — but Handel points out that the numbers are not looking good for him. Rather than dying as an occupation, radiology has seen steady growth; between 2000 and 2019, the number of radiologists whose main activity was patient care grew by an average of about 15 percent per decade, Handel found. Some in the field are even worried about a looming shortage of radiologists that will result in longer turnaround times for imaging diagnoses.
How did radiologists survive the AI invasion? In a 2019 paper in the journal Radiology: Artificial Intelligence, Curtis Langlotz, a radiologist at Stanford, offered a few reasons. One is that humans still routinely outperform machines — even if computers can get very good at spotting certain kinds of diseases, they may lack data to diagnose rarer conditions that experienced humans can easily spot. Radiologists are also adaptable; technological advances (like CT scans and MRIs) have been common in the field, and one of the primary jobs of a human radiologist is to understand and protect patients against the shortcomings of technologies used in the practice. Other experts have pointed to the complications of the health care industry — questions about insurance, liability, patient comfort, ethics and business consolidation may be just as important to the rollout of a new technology as its technical performance.
Langlotz concluded that “Will AI replace radiologists?” is “the wrong question.” Instead, he wrote, “The right answer is: Radiologists who use AI will replace radiologists who don’t.”
Similar trends have played out in lots of other jobs thought to be vulnerable to AI. Will truck drivers be outmoded by self-driving trucks? Perhaps someday, but as The New York Times’ AI reporter Cade Metz recently pointed out, the technology is perpetually just a few years away from being ready and is “a long way from the moment trucks can drive anywhere on their own”. No wonder, then, the end of the road for truck drivers is nowhere near — the government projects that the number of truck-driving jobs will grow over the next decade.
How about fast-food workers, who were said to be replaceable by robotic food-prep machines and self-ordering kiosks? They are safe too, Chris Kempczinski, the CEO of McDonald’s, said in an earnings call this summer. Even with a shortage of fast-food workers, robots “may be great for garnering headlines” but are simply “not practical for the vast majority of restaurants,” he said.
It is possible, even likely, that all of these systems will improve. But there is no evidence that the improvement will come overnight, or quickly enough to result in catastrophic job losses in the short term.
“I don’t want to minimize the pain and adjustment costs for people who are impacted by technological change,” Handel told me. “But when you look at it, you just don’t see a lot — you just don’t see anything as much as being claimed.”