The past few weeks have felt like a
honeymoon phase for our relationship with tools powered by artificial
intelligence.
Many of us have prodded
ChatGPT, a chatbot
that can generate responses with startlingly natural language, with tasks like
writing stories about our pets, composing business proposals, and coding
software programs.
At the same time, many have uploaded
selfies to Lensa AI, an app that uses algorithms to transform ordinary photos
into artistic renderings. Both debuted a few weeks ago.
Like smartphones and social networks when
they first emerged, AI feels fun and exciting. Yet (and I am sorry to be a
buzzkill), as is always the case with new technology, there will be drawbacks,
painful lessons, and unintended consequences.
People experimenting with ChatGPT were
quick to realize that they could use the tool to win coding contests. Teachers
have already caught their students using the bot to plagiarize essays. And some
women who uploaded their photos to Lensa received back renderings that felt
sexualized and made them look skinnier, younger, or even nude.
We have reached a turning point with
artificial intelligence, and now is a good time to pause and assess: How can we
use these tools ethically and safely?
For years, virtual assistants like Siri and
Alexa, which also use AI, were the butt of jokes because they were not
particularly helpful. But modern AI is just good enough now that many people
are seriously contemplating how to fit the tools into their daily lives and
occupations.
“We’re at the beginning of a broader
societal transformation,” said Brian Christian, a computer scientist and the
author of “The Alignment Problem,” a book about the ethical concerns
surrounding AI systems. “There’s going to be a bigger question here for
businesses, but in the immediate term, for the education system, what is the
future of homework?”
With careful thought and consideration, we
can take advantage of the smarts of these tools without causing harm to
ourselves or others.
Understand the limits (and consequences)
First, it is important to understand how
the technology works so you know exactly what you are doing with it.
ChatGPT is essentially a more powerful,
fancier version of the predictive text system on our phones, which suggests
words to complete a sentence as we type. The bot composes its responses by
drawing on what it has learned from vast amounts of data scraped off the web.
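For readers who code, the core idea can be sketched in a few lines of Python. This toy example (the sample text and function names are ours, purely for illustration) just counts which word most often follows another; ChatGPT replaces the counting with an enormous neural network trained on web-scale text, but the underlying task is the same: predict what comes next.

    from collections import Counter, defaultdict

    # Toy illustration only: tally which word follows each word in a tiny
    # sample text, then suggest the most common continuation.
    training_text = ("the cat sat on the mat . the cat ate the fish . "
                     "the dog sat on the rug .")

    follows = defaultdict(Counter)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word][next_word] += 1

    def suggest(word):
        """Return the word most often seen after `word` in the sample text."""
        candidates = follows.get(word)
        return candidates.most_common(1)[0][0] if candidates else "?"

    print(suggest("the"))  # prints "cat", the most common word after "the"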
Because the bot is only predicting which
words are likely to come next, it cannot check whether what it is saying is
true.
If you use a chatbot to code a program, it
draws on how similar code was written in the past. Because code is constantly
updated to address security vulnerabilities, code written with a chatbot
could be buggy or insecure, Christian said.
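To make that concrete, here is a hypothetical example of our own (not from the column's sources) of the kind of dated pattern a bot could echo from old code on the web, next to a safer alternative from Python's standard library:

    import hashlib
    import os

    # Outdated pattern a bot might reproduce from older tutorials:
    # MD5 is fast and unsalted, which makes stored passwords easy to crack.
    def hash_password_outdated(password):
        return hashlib.md5(password.encode()).hexdigest()

    # Safer current practice: a salted, deliberately slow key-derivation
    # function built into Python's standard library.
    def hash_password_better(password):
        salt = os.urandom(16)
        return salt + hashlib.pbkdf2_hmac("sha256", password.encode(),
                                          salt, 600_000)

Both functions run, but only the first would have looked normal in code written a decade ago, which is exactly the kind of pattern a model trained on old examples can reproduce.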
Likewise, if you are using ChatGPT to write
an essay about a classic book, chances are that the bot will construct
seemingly plausible arguments. But if others published a faulty analysis of the
book on the web, that may also show up in your essay. If your essay was then
posted online, you would be contributing to the spread of misinformation.
“They can fool us into thinking that they
understand more than they do, and that can cause problems,” said Melanie
Mitchell, an AI researcher at the Santa Fe Institute.
OpenAI, the company behind ChatGPT,
declined to comment for this column.
Similarly, AI-powered image-editing tools
like Lensa train their algorithms with existing images on the web. Therefore,
if women are presented in more sexualized contexts, the machines will re-create
that bias, Mitchell said.
Prisma Labs, the developer of Lensa, said
it was not consciously applying biases — it was just using what was out there.
“Essentially, AI is holding a mirror to our society,” said Anna Green, a Prisma
spokesperson.
A related concern is that if you use the
tool to generate a cartoon avatar, it will base the image on the styles of
artists’ published work without compensating them or giving them credit.
Know what you are giving up
A lesson that we have learned again and
again is that when we use an online tool, we have to give up some data, and AI
tools are no exception.
When asked whether it was safe to share
sensitive texts with ChatGPT, the chatbot responded that it did not store your
information but that it would probably be wise to exercise caution.
Prisma Labs said that it solely used photos
uploaded to Lensa for creating avatars, and that it deleted images from its
servers after 24 hours. Still, photos that you want to keep private should
probably not be uploaded to Lensa.
“You’re helping the robots by giving them
exactly what they need in order to create better models,” said Evan Greer, a
director for Fight for the Future, a digital rights advocacy group. “You should
assume it can be accessed by the company.”
Use them to improve, not do, your work
With that in mind, AI can be helpful if we
are looking for a light assist. A person could ask a chatbot to rewrite a
paragraph in an active voice. A non-native English speaker could ask ChatGPT to
remove grammatical errors from an email before sending it. A student could ask
the bot for suggestions on how to make an essay more persuasive.
But in any situation like those, do not
blindly trust the bot.
“You need a human in the loop to make sure
that they’re saying what you want them to say and that they’re true things
instead of false things,” Mitchell said.
And if you do decide to use a tool like
ChatGPT or Lensa to produce a piece of work, consider disclosing that it was
used, she added. That would be similar to giving credit to other authors for
their work.
Disclosure: The ninth paragraph of this
column was edited by ChatGPT (although the entire column was written and
fact-checked by humans).