Plato mourned the invention of the alphabet, worried that
the use of text would threaten traditional memory-based arts of rhetoric. In
the “Phaedrus,” arguing through the voice of Thamus, the Egyptian king of the
gods, Plato claimed the use of this new technology would create
“forgetfulness in the learners’ souls, because they will not use their
memories,” that it would impart “not truth but only the semblance of truth” and
that those who adopt it would “appear to be omniscient and will generally know
nothing,” with “the show of wisdom without the reality.”
If Plato were alive today, would he say similar things about
ChatGPT?
ChatGPT, a conversational artificial intelligence program
released recently by OpenAI, is not just another entry in the artificial
intelligence hype cycle. It is a significant advance that can produce, in
response to open-ended questions, articles comparable to good high school
essays.
It is in high schools, and even in college, that some of ChatGPT’s most
interesting and troubling aspects will become clear.
Essay writing is most often assigned not because the result
has much value — proud parents putting good grades on the fridge aside — but
because the process teaches crucial skills: researching a topic, judging
claims, synthesizing knowledge, and expressing it in a clear, coherent and
persuasive manner. Those skills will be even more important because of advances
in AI.
When I asked ChatGPT a range of questions — about the
ethical challenges faced by journalists who work with hacked materials, the
necessity of cryptocurrency regulation, the possibility of democratic
backsliding in the US — the answers were cogent, well-reasoned and clear. It is
also interactive: I could ask for more details or request changes.
But then, on trickier topics or more complicated concepts,
ChatGPT sometimes gave highly plausible answers that were flat-out wrong —
something its creators warn about in their disclaimers.
Unless you already knew the answer or were an expert in the
field, you could be subjected to a high-quality intellectual snow job.
You would face, as Plato predicted, “the show of wisdom
without the reality.”
All this, however, does not mean ChatGPT — or similar tools, for it is not
the only one of its kind — cannot be useful in education.
Schools have already been dealing with the internet’s wealth
of knowledge, along with its lies, misleading claims, and essay mills.
One way has been to change how they teach. Rather than
listen to a lecture in class and then go home to research and write an essay,
students listen to recorded lectures and do research at home, then write essays
in class, under supervision and even in collaboration with peers and teachers.
This approach is called flipping the classroom.
In flipped classrooms, students would not use ChatGPT to
conjure up a whole essay. Instead, they would use it as a tool to generate
critically examined building blocks of essays. It would be similar to how
students in advanced math classes are allowed to use calculators to solve
complex equations without replicating tedious, previously mastered steps.
Teachers could assign a complicated topic and allow students
to use such tools as part of their research. Assessing the veracity and
reliability of these AI-generated notes and using them to create an essay would
be done in the classroom, with guidance and instruction from teachers. The goal
would be to increase the quality and the complexity of the argument.
This would require more teachers to provide detailed
feedback. Unless sufficient resources are provided equitably, adapting to
conversational AI in flipped classrooms could exacerbate inequalities.
In schools with fewer resources, some students may end up
turning in AI-produced essays without obtaining useful skills or really knowing
what they have written. “Not truth but only the semblance of truth,” as Plato
said.
Some school officials may treat this as merely a problem of plagiarism
detection and expand the use of draconian surveillance systems.
During the pandemic, many students were forced to take tests or write essays
under the gaze of an automated eye-tracking system or on a locked-down computer
to prevent cheating.
In a fruitless arms race against conversational AI,
automated plagiarism software may become supercharged, making school more
punitive for monitored students. Worse, such systems will inevitably produce
some false accusations, which damage trust and may even stymie the prospects of
promising students.
Educational approaches that treat students like enemies may
teach students to hate or subvert the controls. That’s not a recipe for human
betterment.
While some students lag, advanced AI will create a demand
for other advanced skills. Nobel laureate Herbert Simon noted in 1971 that as
information became overwhelming, the value of our attention grew. “A wealth of
information creates a poverty of attention,” as he put it. Similarly, the
ability to discern truth from the glut of plausible-sounding but profoundly
incorrect answers will be precious.
Already, Stack Overflow, a widely used website where
programmers ask one another coding-related questions, banned ChatGPT answers
because too many of them were hard-to-spot nonsense.
Why rely on it at all, then?
At a minimum, because it will soon transform many
occupations. The right approach when faced with transformative technologies is
to figure out how to use them for the betterment of humanity.
Betterment has been a goal of public education for at least
the past 150 years. But while a high school diploma once led to a better job,
in the past few decades, the wages of high school graduates have greatly lagged
those of college graduates, fostering inequality.
If AI enhances the value of education for some while
degrading the education of others, the promise of betterment will be broken.
Plato erred by thinking that memory itself is a goal, rather
than a means for people to have facts at their call so they can make better
analyses and arguments. The Greeks developed many techniques to memorize poems
like the “Odyssey,” with its more than 12,000 lines. Why force such feats of
memorization once it can all be written down in books?
Just as Plato was wrong to fear the written word as the enemy, we would be
wrong to resist a technology that allows us to gather information more
easily.
Just as societies responded to previous technological advances, like
mechanization, by eventually enacting a public safety net, a shorter
workweek and a minimum wage, we will need policies that allow more people
to live with dignity as a basic right, even if their skills have been
superseded. With so much more wealth generated now, we could unleash our
imagination even more, expanding free time and better working conditions for
more people.
The way forward is not merely to lament supplanted skills, as Plato did, but
to recognize that as more complex skills become essential, our society must
equitably educate people to develop them. And it always comes back to the
basics: value people as people, not just as bundles of skills.
And that is not something ChatGPT can tell us how to do.
Jordan News