As has been the case from the very beginning, IT jargon keeps getting richer all the time, with new terms and expressions added almost weekly.
Keeping up with the trend is not easy. Searching the web or
perusing Wikipedia articles does not always provide the simple, uncomplicated
explanation that the layperson is looking for. Some terms are harder to comprehend than others, often because they are very new and inherently complex.
For example, the expression artificial intelligence (AI) has by now become part of the vocabulary that everybody needs to have acquired in order to take part in any discussion about information technology, or to read and understand material on the subject. “Everybody” means even those who are not particularly tech-minded.
Although the concept was introduced to the wider public, mostly in theory, in the early 1980s, AI-based techniques and algorithms are now very real and are put to good use in an increasing number of applications that we depend on every day, from online banking to online shopping, messaging, advanced gaming and weather forecasting.
They significantly contribute to making software more
efficient, more “intelligent” in a way, as the expression implies. The
practical result, for us users, is software and devices that are easier to use and that offer more functionality and, ultimately, more power.
Much newer, and perhaps more difficult to grasp, are generative adversarial networks (GANs). Admittedly, the sound of the sophisticated expression alone is guaranteed to make you the center of attention, the focus of the discussion, if you happen to say it out loud at a casual evening gathering with friends.
Seeking the assistance of Wikipedia, the first source that
comes to mind, will not really help here, for the otherwise well-written
article itself is full of sweet things like “neural networks” and
“interpretable machine learning”. It is essentially penned for IT
professionals.
GANs are AI methods in which two neural networks are trained against each other, one generating candidate results and the other judging how convincing they are, over a very large number of repetitions (the very essence of learning methods of all kinds), so that the system ends up able to mimic human action and generate results that a person would be able to come up with.
For instance, a GAN system could analyze a very large number
of actual photographs, portraits for example, learn from them, and then create
a very realistic photograph of a person that does not really exist. It would be
a virtual portrait, practically impossible to tell apart from a photograph of a real person.
Other applications include digitally reading a text, such as
a poem, and then composing the music that would go with it nicely, taking into
consideration the meaning of the lyrics, the rhythm of the words used, the
context, the style, etc.
So, in a nutshell, and to put it again in layman’s terms, GANs are advanced programming methods that tap into a combination of neural networks and AI to generate results that, until now, only humans were thought capable of producing.
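For readers curious to peek under the hood, here is a minimal sketch of that generator-versus-discriminator loop, written in Python with the PyTorch library. The tiny network sizes, the training settings and the random stand-in “photos” are illustrative assumptions only, not a recipe for producing realistic portraits.

```python
# A minimal sketch of the generator-versus-discriminator idea behind a GAN,
# using PyTorch and random stand-in data. Network sizes and training settings
# are illustrative assumptions, not a production setup.
import torch
import torch.nn as nn

LATENT_DIM = 16   # size of the random "seed" the generator starts from
DATA_DIM = 64     # size of one flattened training sample (e.g. a tiny image)

# The generator learns to turn random noise into something that looks real.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# The discriminator learns to tell real samples from generated ones.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.ReLU(),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real_data = torch.rand(256, DATA_DIM) * 2 - 1  # placeholder for real photographs

for step in range(200):
    batch = real_data[torch.randint(0, 256, (32,))]

    # 1) Train the discriminator: real samples should score 1, fakes 0.
    fake = generator(torch.randn(32, LATENT_DIM)).detach()
    d_loss = (loss_fn(discriminator(batch), torch.ones(32, 1))
              + loss_fn(discriminator(fake), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator: its fakes should fool the discriminator (score 1).
    g_loss = loss_fn(discriminator(generator(torch.randn(32, LATENT_DIM))),
                     torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After many repetitions, generator(torch.randn(1, LATENT_DIM)) produces a
# sample the discriminator can no longer easily tell apart from the real data.
```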
Distributed cloud (DC) is another buzzword. Whereas “the cloud” itself goes without saying these days, DC refers to cloud services that are hosted not in one single geographic location but in several. The data you save through the cloud service, or the cloud application you are using, will therefore be “distributed”, or scattered, over more than one server, anywhere in the world.
For the users, this is transparent, as they have nothing
specific to do but to use the service. The distribution is done by the
provider, to optimize the service, for increased safety and security, and/or to
leverage the specific expertise of each location. The result, simply, is better
and more efficient service.
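To make the idea a little more concrete, here is a small, purely hypothetical Python sketch of what “distributing” a user’s data over several regions could look like. The region names and the CloudStore class are invented for illustration and do not correspond to any real provider’s API.

```python
# A hypothetical illustration of a distributed cloud: the same piece of user
# data is replicated to servers in several regions, and reads are served from
# whichever copy is closest. Everything here is invented for illustration.
from dataclasses import dataclass, field

@dataclass
class CloudStore:
    # One simple key-value store per geographic region.
    regions: dict = field(default_factory=lambda: {
        "eu-west": {}, "us-east": {}, "asia-south": {},
    })

    def save(self, key: str, value: str) -> None:
        # "Distribution" here means writing the same data to every region.
        for store in self.regions.values():
            store[key] = value

    def read(self, key: str, user_region: str) -> str:
        # The user is served from the nearest region; to them it is transparent.
        nearest = user_region if user_region in self.regions else "eu-west"
        return self.regions[nearest][key]

cloud = CloudStore()
cloud.save("holiday-photo.jpg", "<binary data>")
print(cloud.read("holiday-photo.jpg", user_region="asia-south"))
```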
It is difficult not to mention the voice user interface (VUI) in this story, another IT expression that is hard to ignore, especially in mobile applications.
According to datapine.com, “20 percent of mobile queries are
currently performed by voice search while 72 percent of people claim it has
become a part of their daily routines”.
Improvement in voice recognition technology is a solid fact
and has brought us Amazon Alexa, Siri, Bixby and Google Assistant. Whether at
work or at home, we increasingly use voice instead of the keyboard and mouse to interact with technology. In countless applications, voice has become the preferred interface.
One thing is certain: with time, the IT-specific glossary is bound to grow without limit. We just have to keep up with its growth.
This writer is a computer engineer and a classically
trained pianist and guitarist. He has been regularly writing IT articles,
reviewing music albums, and covering concerts for more than 30 years.