Italy has become the first Western country
to block the popular artificial intelligence bot ChatGPT. Italian authorities
did not block the AI software because the technology is
advancing too quickly and becoming too powerful. Instead, the Italian data
protection authority blocked the application over privacy concerns and
questions over ChatGPT’s compliance with the European Union’s General Data
Protection Regulation. The event made international headlines and tapped into
deeper global fears that AI is getting too powerful.
Many of us cannot comprehend how the
technology has developed so quickly. One reason is that there have been
few guardrails from a regulatory standpoint to keep tabs on the growth of AI.
Humanity needs guardrails in place, but that is much easier said than
done.
The regulation of AI is becoming increasingly
vital as the technology is being used more widely in areas such as healthcare,
finance, and transportation. According to a study by researchers at the
University of Pennsylvania and OpenAI, the privately-held company behind
ChatGPT, most jobs will be changed significantly by AI in the near future. For
nearly 20 percent of jobs in the study, ranging from accountant to writer, at
least half of their tasks could be completed much faster with ChatGPT and
similar tools. While we do not know what this will do to the labor market, it
will have an unavoidably large impact that could have knock-on effects across
society.
The state of global AI regulation
There has been a slight movement towards
better regulation worldwide in the last decade. The European Union, in line
with its data protection standards, has developed a framework for AI regulation that includes rules for
high-risk AI applications, requirements for transparency and accountability,
and a ban on specific uses of AI, such as social scoring — the practice of
using the technology to rank people by their trustworthiness.
The US has also slowly started to regulate
AI, with the National Institute of Standards and Technology developing a set
of principles for AI governance and the Federal Trade
Commission taking enforcement actions against companies that use AI in ways
that violate consumer protection laws.
Yet, these regulations are inadequate,
given the speed at which AI develops. The concern over AI’s growing power and
the lack of regulations has led some technology leaders to call for a pause on
AI development. Last month, according to The Guardian, more than 1,800 signatories, including Elon
Musk, the cognitive scientist Gary Marcus, and Apple co-founder Steve Wozniak,
called for a six-month pause on developing systems “more powerful” than
GPT-4.
Musk was a co-founder of OpenAI and has
since expressed misgivings about how the company is run. The open letter states
that “AI labs and independent experts should use this pause to jointly develop
and implement a set of shared safety protocols for advanced AI design and
development that are rigorously audited and overseen by independent outside
experts.”
Bringing regulation up to speed
Independent experts are vital to sensible
AI regulation. This should not overshadow the fact that the sector has
developed with little oversight from government authorities. This is one reason
we are having an urgent conversation about its growing power.
In short, AI regulation needs to catch up
to the rapid pace of technological development. AI’s economic, societal, and
intellectual risks are not adequately addressed in the halls of power
worldwide. The use of facial recognition technology, for example, has been
shown to be biased against certain groups but is still largely unregulated in
many countries, including the US. Similarly, using AI in automated
decision-making systems, such as credit scoring or employment screening, has
raised concerns about algorithmic bias and discrimination.
AI language models such as ChatGPT can
generate harmful or misleading content, such as hate speech, and can perpetuate
existing societal biases and inequalities. Then there is the issue of data
privacy, which Italy has zeroed in on. Language models require access to large
amounts of personal data to function effectively. Should individuals hand over
their data? If so, under what terms? These are serious questions that do not
have answers right now because of the absence of sound regulations. Do you know
how much of your data is being fed into a large AI language model as you read
this piece? I certainly do not, and that is part of the problem.
A smaller-scale solution?
The ever-present danger with regulation, on
the other hand, is that it could stifle innovation. Private companies across
Silicon Valley love this line of argumentation because it changes the
parameters of debate. They say that more research is needed to thoroughly
understand AI’s risks and benefits. Moreover, AI regulation requires input from
various stakeholders, including government agencies, industry leaders, and
civil society organizations. Getting various stakeholders to come together and
investigate how AI is changing society is not easy. Giving researchers more
time to evaluate AI risk can happen while regulations are being written. These
things do not have to be mutually exclusive.
Given the lack of movement in the world’s
most powerful countries on AI regulation, it is time for small countries to
enact their own regulations. Technology-heavy economies from Estonia to the
United Arab Emirates have the knowledge base to draft sensible regulations and
see how they affect AI development. These countries are also small enough that
the regulations can be updated and amended as the technology evolves. However
we look at it, it is time for AI guardrails.
Jordan News