The News
Friday, November 22, 2024

The Artificial Intelligence Scare


The definition of intelligence is the ability to achieve tasks; therefore, programming a computer to perform such tasks earns it the label "intelligent." Photo: Flickr
To this day, AI has evolved, found its way into our daily lives, and isn't going anywhere

APPS & BOTS

Research on Artificial Intelligence (AI) began in the context of war. During the late 1940s and early 1950s, scientists began programming computers to perform tasks that would simulate and automate human thought processes. The definition of intelligence is the ability to achieve tasks; therefore, programming a computer to perform such tasks earns it the label "intelligent."

To this day, AI has evolved and found its way into our daily lives, e.g., Siri, and promises not to leave. AI today is at a stage known as narrow or weak AI, meaning it carries out only simple computing tasks such as speech recognition. It is narrow in the sense that it has only one task to perform.

The future of AI, however, lies in carrying out complex tasks beyond human capability. This is known as general or strong artificial intelligence (AGI). And this is where concerns arise: the sci-fi moment known as the singularity, when machines outsmart humans. What if they turn evil? What if they decide to wipe out the human race?

Experts point out that such scenarios become possible only if human objectives are misaligned with the objectives we set for AI. Since the main motivation behind AI research is to have a positive, helpful effect on society, we simply need to align the two. Easy.

But what if a human's goal is actually to kill people? More and more, the conversation on AI revolves around autonomous weapons. Last year, an open letter signed by Steve Wozniak, Elon Musk, Stephen Hawking, Noam Chomsky, and several robotics and tech specialists called for a ban on this type of weapon.

The creation of superintelligent computers capable of self-improvement calls for close examination not only of the purpose of their tasks but also of a legal framework to monitor them. In both areas, what is to come has no precedent. Will the initial fears serve as a thought experiment that keeps these systems under control? Or will they become omens of inevitable doom? We will soon find out.