With any groundbreaking new technology, the pace of
adoption climbs quickly. Over the past two decades, new platforms and tools,
from the iPhone to
TikTok, have seen progressively faster adoption rates. The
adoption rate of ChatGPT, the artificial intelligence (AI) large language model
owned by OpenAI,
is unlike anything we have ever seen. Within five days of release,
the platform had a million unique users.
The spike in new users has led to a deluge of
thought pieces and discussions about the future of work in an
AI-powered world. The world has been turned upside down by claims that
ChatGPT signals the true start of the AI age. The speedy embrace of AI tools is evident in
emerging markets, where companies and governments have almost tripped over
themselves to underscore their use of AI.
The glaring problem is that platforms like ChatGPT
are still in their infancy. While it is clear many are eager for the AI future
to take hold, the fact is that the technology is nowhere close to
achieving genuine intelligence or reason.
Human capabilities and AI limits
The human mind has an uncanny ability to use a small amount of data to create thought, language, and reason.
Think about the development of language in a baby: babies develop language from
only a few cues from family and the surrounding environment.
This is a simplified take on language development,
but it is vital for understanding the limitations of ChatGPT when it comes to
mimicking human thought. Unlike humans, large
language models such as ChatGPT
analyze massive data sets and produce content based on guesses about patterns in
the data they can access. The human mind could never process the volume of data required to
make ChatGPT function, nor would it need so much data.
At the heart of OpenAI’s approach to ChatGPT is the
notion that human behavior is predictable. By analyzing large data sets,
OpenAI’s algorithm can essentially guess what we are thinking or looking for in
an answer.
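To make the prediction idea concrete, here is a minimal, hypothetical sketch in Python (the word counts are invented for illustration; this is not OpenAI's actual system): a purely statistical text generator picks the next word in proportion to how often it has seen that word follow the prompt in its data.

```python
import random

# Hypothetical, invented statistics: how often each word followed
# the phrase "the earth is" in some imagined training data.
next_word_counts = {"round": 80, "flat": 15, "warming": 5}

def sample_next_word(counts):
    """Pick a continuation in proportion to how often it appeared in the data."""
    words = list(counts)
    weights = list(counts.values())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "the earth is"
print(prompt, sample_next_word(next_word_counts))
```

The point of the sketch is that such a system trades only in probabilities; it has no notion of which continuation is actually true.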
Can machines really read our minds?
While we might have a knee-jerk reaction to the
claim that human behavior is predictable, we must consider the effect of nearly
two decades of internet and smartphone usage. Our behavior is increasingly
predictable because we have allowed algorithms to shape what we read and digest
on the internet. Features like Google’s autocomplete in Gmail can
predict what we want to write with a startling degree of accuracy. Moreover,
the direction of travel is evident as we spend more and more time on our phones.
When was the last time you visited a bookstore or a library and serendipitously
stumbled upon a book you did not know you were looking for?
While we might be becoming more predictable,
AI tools still lack the fundamental ability to reason, as the linguist Noam
Chomsky recently explained in the New York Times:
“Whereas humans are limited in the kinds of explanations we can rationally
conjecture, machine learning systems can learn both that the earth is flat and
that the earth is round. They trade merely in probabilities that change over
time. For this reason, the predictions of machine learning systems will always
be superficial and dubious.”
The balance of creativity and constraint
This should not undermine the incredible value of
ChatGPT and other AI tools. They use impressive computing power to harness our
increasing online predictability. Yet the overzealous embrace of these
technologies completely misses this vital point about human development and its
impact on society.
There are also many other ethical issues raised by the
quick embrace of these technologies. In its latest white paper for the launch of
GPT-4, OpenAI is quick to note that
“AI systems will have even greater potential to reinforce entire ideologies,
world views, truths, and untruths, and to cement them or lock them in,
foreclosing future contestation, reflection, and improvement.” This is scary
stuff, especially in light of recent developments such as Microsoft’s dismissal of its entire AI ethics team and the general lack of ethical guardrails in
AI development.
The ethics debates will only grow in light of a
critical point raised by Chomsky that ChatGPT and other AI tools are
“constitutionally unable to balance creativity with constraint. They either
overgenerate (producing both truths and falsehoods, endorsing ethical and
unethical decisions alike) or undergenerate (exhibiting noncommitment to any
decisions and indifference to consequences).”
Collaborating with chatbots
The truly beneficial future with AI tools lies
in collaboration instead of a complete takeover or outsourcing. These tools can help us
produce work of all kinds more efficiently, but they cannot replace the human
mind. The
iPhone, for example, facilitated better communication and made the
world more accessible but did not change the essence of how we work. ChatGPT will
not change the nature of work, either.
Companies and governments rushing to upend their
working models would be well served to pump the brakes and consider a more incremental
incorporation of AI. This will be challenging given how aggressively the
technology sector pushes ChatGPT, but the fad will eventually fade and be
replaced by another platform. We should use this opportunity as a society to
reflect on the nature of human intelligence and how it is being changed in the
technology age.