Last November, when ChatGPT was released, many schools felt
as if they’d been hit by an asteroid.
In the middle of an academic year, with no warning, teachers
were forced to confront the new, alien-seeming technology, which allowed
students to write college-level essays, solve challenging problem sets and ace
standardized tests.
Some schools responded — unwisely, I argued at the time — by
banning ChatGPT and tools like it. But those bans didn’t work, in part because
students could simply use the tools on their phones and home computers. And as
the year went on, many of the schools that restricted the use of generative
artificial intelligence — as the category that includes ChatGPT, Bing, Bard and
other tools is called — quietly rolled back their bans.
Ahead of this school year, I talked with numerous K-12
teachers, school administrators and university faculty members about their
thoughts on AI now. There is a lot of confusion and panic, but also a fair bit
of curiosity and excitement. Mainly, educators want to know: How do we actually
use this stuff to help students learn, rather than just try to catch them
cheating?
I’m a tech columnist, not a teacher, and I don’t have all
the answers, especially when it comes to the long-term effects of AI on
education. But I can offer some basic, short-term advice for schools trying to
figure out how to handle generative AI this fall.
First, I encourage educators — especially in high schools
and colleges — to assume that 100 percent of their students are using ChatGPT
and other generative AI tools on every assignment, in every subject, unless
they’re being physically supervised inside a school building.
At most schools, this won’t be completely true. Some
students won’t use AI because they have moral qualms about it, because it’s not
helpful for their specific assignments, because they lack access to the tools,
or because they’re afraid of getting caught.
But the assumption that everyone is using AI outside class
might be closer to the truth than many educators realize. (“You have no idea
how much we’re using ChatGPT,” read the title of a recent essay by a Columbia undergraduate
in the Chronicle of Higher Education.) And it’s a helpful shortcut for teachers
trying to figure out how to adapt their teaching methods. Why would you assign
a take-home exam, or an essay on “Jane Eyre,” if everyone in class — except,
perhaps, the most strait-laced rule-followers — will use AI to finish it? Why
wouldn’t you switch to proctored exams, blue-book essays, and in-class group
work, if you knew that ChatGPT was as ubiquitous as Instagram and Snapchat
among your students?
Second, schools should stop relying on AI detector programs
to catch cheaters. There are dozens of these tools on the market now, all
claiming to spot writing that was generated with AI, and none of them works
reliably. They generate lots of false positives and can be easily fooled by
techniques like paraphrasing. Don’t believe me? Ask OpenAI, the maker of
ChatGPT, which discontinued its AI writing detector this year because of a “low
rate of accuracy.”
In the future, AI companies may be able to label their models’ outputs to
make them easier to spot — a practice known as “watermarking” — or better AI
detection tools may emerge. But for now,
most AI text should be considered undetectable, and schools should spend their
time (and technology budgets) elsewhere.
My third piece of advice — and the one that might get me the
most angry emails from teachers — is that teachers should spend less time
warning students about the shortcomings of generative AI and more time
figuring out what the technology does well.
Last year, many schools tried to scare students away from
using AI by telling them that tools like ChatGPT are unreliable, prone to
spitting out nonsensical answers and generic-sounding prose. These criticisms,
while true of early AI chatbots, are less true of today’s upgraded models, and
clever students are figuring out how to get better results by giving the models
more sophisticated prompts.
As a result, students at many schools are racing ahead of
their instructors when it comes to understanding what generative AI can do, if
used correctly. And the warnings about flawed AI systems issued last year may
ring hollow this year, now that GPT-4 is capable of getting passing grades at
Harvard.
Alex Kotran, the chief executive of the AI Education
Project, a nonprofit that helps schools adopt AI, told me that teachers needed
to spend time using generative AI themselves to appreciate how useful it could
be — and how quickly it’s improving.
“For most people, ChatGPT is still a party trick,” he said.
“If you don’t really appreciate how profound of a tool this is, you’re not
going to take all the other steps that are going to be required.”
There are resources for educators who want to bone up on AI
in a hurry. Kotran’s organization has a number of AI-focused lesson plans
available for teachers, as does the International Society for Technology in
Education. Some teachers have also begun assembling recommendations for their
peers, such as a website made by faculty at Gettysburg College that provides
practical advice on generative AI for professors.
In my experience, though, there is no substitute for hands-on
experimentation. So I’d advise teachers to start using ChatGPT and other
generative AI tools themselves, with the goal of becoming as fluent in the
technology as many of their students already are.
My last piece of advice for schools that are flummoxed by
generative AI is this: treat this year — the first full academic year of the
post-ChatGPT era — as a learning experience, and don’t expect to get everything
right.
There are many ways AI could reshape the classroom. Ethan
Mollick, a professor at the University of Pennsylvania’s Wharton School, thinks
the technology will lead more teachers to adopt a “flipped classroom” — having
students learn material outside of class, and practice it in class — which has
the advantage of being more resistant to AI cheating. Other educators I spoke
with said they were experimenting with turning generative AI into a classroom
collaborator, or a way for students to practice their skills at home with the
help of a personalized AI tutor.
Some of these experiments won’t work. Some will. That’s OK.
We’re all still adjusting to this strange new technology in our midst, and the
occasional stumble is to be expected.
But students need guidance when it comes to generative AI,
and schools that treat it as a passing fad — or an enemy to be vanquished —
will miss an opportunity to help them.
“A lot of stuff’s going to break,” Mollick said. “And so we
have to decide what we’re doing, rather than fighting a retreat against the
AI.”