How Could Artificial Intelligence Threaten Human Existence in the Near Future?

What Is Artificial Intelligence?


Artificial intelligence is advancing rapidly. Science fiction often depicts AI as robots with human-like characteristics, but the reality today looks quite different.

Artificial intelligence is designed to make machines think the way humans do. Machine learning, simply put, is the practice of using algorithms to collect data, learn from it, and then make predictions or decisions. The Turing Test is a test of intelligence in a computer: a machine passes if a human being, judging only by the answers given, cannot reliably distinguish it from another human being.
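To make "learning from data" concrete, here is a minimal sketch of one of the simplest learning methods, a nearest-neighbour classifier. The data and labels are invented for illustration; real systems use far richer algorithms, but the pattern is the same: collect examples, then predict by generalising from them.

```python
# A minimal sketch of "learning from data": a 1-nearest-neighbour
# classifier that predicts a label by finding the most similar
# training example. All data below is made up for illustration.

def predict(train, query):
    """Return the label of the training point closest to `query`."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Pick the (features, label) pair with the smallest distance.
    _, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# Toy training data: (feature vector, label) pairs.
train = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"),
         ((8.0, 9.0), "dog"), ((7.5, 8.5), "dog")]

print(predict(train, (1.1, 1.0)))  # → cat
print(predict(train, (8.2, 8.8)))  # → dog
```

The "learning" here is nothing more than storing the examples; the prediction step does the generalising. More sophisticated methods compress the data into a model instead, but the collect-learn-predict loop is the same.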

Artificial intelligence today is often referred to as weak (or narrow) AI, which is designed to perform a single task (e.g. facial recognition or driving a car). The other kind of artificial intelligence is termed general AI, which is designed to think and solve problems the way humans do. While weak AI may already outshine humans at a particular task, such as playing chess or solving equations, general AI would outperform humans at nearly every cognitive task.

Robots are autonomous or semi-autonomous machines, an application of artificial intelligence that can operate without our direct commands. Robots could use learning abilities to improve their autonomous functions, although a robot could also be designed with no self-learning capacity at all.

AI could help us in many fields, such as infrastructure, medicine, smart classrooms, and even space exploration. Here, however, we will focus on its drawbacks.

How could AI affect our existence?

Researchers generally agree that AI is unlikely to exhibit human emotions such as love, and that there is no reason to expect AI to become intentionally benevolent or malevolent toward us. Instead, when considering how AI might become a risk, experts think two scenarios are most likely:

1) An AI could be programmed to do something devastating. Artificial intelligence could, for instance, be programmed to kill. Mass murder or torture could easily occur if such machines fell into the wrong hands, and this could eventually lead to an AI arms race or war. If an AI can be made to kill, there also remains the possibility that it could be made indestructible, or designed so that it can never be turned off. In that case, human survival itself would be at stake.

2) An AI could be given a beneficial task but take a destructive path to accomplish it. This can happen whenever we fail to align the AI's goals fully with our own, which is an extremely difficult thing to do. Suppose you commanded an AI-driven automobile to get you to your destination as fast as possible. It might obey literally: running traffic signals and endangering people along the way, creating havoc as it goes. A superintelligent AI could be extremely efficient at completing its task, but if that task is not aligned with our needs, it could cause serious harm.
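The car scenario above can be sketched as a toy objective-function problem. Everything here (the plans, the numbers, the penalty weights) is invented for illustration; the point is only that an agent optimising a naive objective picks the plan we did not want, while adding penalties for the constraints we forgot to state changes its choice.

```python
# A toy sketch of goal misalignment: an agent picks a driving plan
# by maximising a score. The naive objective rewards only speed, so
# the "reckless" plan wins; an objective that also penalises harm
# picks the plan we actually wanted. All values are invented.

plans = [
    # (name, travel_minutes, signals_run, people_endangered)
    ("cautious", 30, 0, 0),
    ("reckless", 12, 5, 3),
]

def naive_score(plan):
    _, minutes, _, _ = plan
    return -minutes  # faster is better; nothing else matters

def aligned_score(plan):
    _, minutes, signals, endangered = plan
    # Heavy penalties encode the constraints the command left unstated.
    return -minutes - 100 * signals - 1000 * endangered

print(max(plans, key=naive_score)[0])    # → reckless
print(max(plans, key=aligned_score)[0])  # → cautious
```

The hard part in reality is that we cannot enumerate every penalty in advance; this is the alignment problem the paragraph describes, reduced to two hand-written plans.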


In a paper published in the journal Science Robotics, researchers Sandra Wachter, Brent Mittelstadt, and Luciano Floridi point out that controlling robots is extremely difficult. And as artificial intelligence becomes more widespread, this will become a greater problem for society.

In 2015, Elon Musk donated $10 million, as reported in Wired magazine, "to keep A.I. from turning evil." Musk, Bill Gates, and Stephen Hawking have all issued warnings about the dark side of artificial intelligence if we fail to control its development.


Consequences of the misuse of AI

Artificial intelligence has frightened people across the globe through science-fiction movies like "The Matrix". Many such films revolve around the concept of "the Singularity": the moment when AIs become more intelligent and capable than their creators. Though the scenarios differ from movie to movie, the result remains the same: the end of humanity, or machines ruling over us.

Several renowned scientists and physicists have spoken about their fears regarding AI. The physicist Stephen Hawking worried that AI could take over the Earth and eradicate the human race; if robots somehow became smarter than humans, he argued, they could create their own weapons and rule the planet. In 2014, he told the BBC: "It would take off on its own, and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

Recently, Elon Musk, the futurist CEO of Tesla and SpaceX, called AI "a fundamental risk to the existence of human civilization" at the 2017 National Governors Association Summer Meeting.

That said, neither Hawking nor Musk believes that the development of AI should be stopped, only that we must ensure these machines don't go rogue. Musk said: "Normally, the way regulations are set up is a whole bunch of bad things happens, there's a public outcry, and after many years, a regulatory agency is set up to regulate that industry." Hawking believed that a global governing body needs to regulate the development of AI.


Russian President Vladimir Putin recently voiced similar concerns at a meeting with Russian students in early September, saying, "The one who becomes the leader in this sphere will be the ruler of the world." These comments reinforce Musk's position; he tweeted that the race for AI superiority is the "most likely cause of WW3."

What researchers say about the drawbacks of AI and how it could affect our lives

According to Stuart Armstrong, a philosopher and Research Fellow at the Future of Humanity Institute at Oxford:

The first impact of Artificial Intelligence technology is near total unemployment. You could take an AI if it was of human-level intelligence, copy it a hundred times, train it in a hundred different professions, copy those a hundred times and you have ten thousand high-level employees in a hundred professions, trained out maybe in the course of a week. Or you could copy it more and have millions of employees… And if they were truly superhuman you’d get performance beyond what I’ve just described.

Daniel Dewey, a research fellow at the Future of Humanity Institute, builds on Armstrong’s train of thought in Aeon Magazine. After all, when and if humans do become obsolete, we’ll become little more than pebbles in a robot’s metaphorical shoes.

Armstrong said:

The difference in intelligence between humans and chimpanzees is tiny. But in that difference lies the contrast between 7 billion inhabitants and a permanent place on the endangered species list. That tells us it's possible for a relatively small intelligence advantage to quickly compound and become decisive. As Dewey put it, the basic problem is that the strong realization of most motivations is incompatible with human existence. An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don't take root systems or ant colonies into account when we go to construct a building. You could give it a benevolent goal — something cuddly and utilitarian, like maximizing human happiness. But an AI might think that human happiness is a biochemical phenomenon. It might think that flooding your bloodstream with non-lethal doses of heroin is the best way to maximize your happiness.

Mark Bishop, professor of cognitive computing at Goldsmiths, University of London, told The Independent:

I am particularly concerned by the potential military deployment of robotic weapons systems – systems that can take a decision to militarily engage without human intervention – precisely because current AI is not very good and can all too easily force situations to escalate with potentially terrifying consequences. So it is easy to concur that AI may pose a very real ‘existential threat’ to humanity without having to imagine that it will ever reach the level of superhuman intelligence.

"We should be worried about AI, but for the opposite reasons to those given by Professor Hawking," he explained.


Bill Joy, co-founder and Chief Scientist of Sun Microsystems, wrote:

Maybe we'll see the end coming long before it makes its way over. Except that by then, we'll be too incompetent to survive even attempting to shut it down. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually, a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage, the machines will be in effective control. People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.

Conclusion

Though the rise of AI could threaten our existence, we cannot forget the benefits it could bring if it is kept in safe hands and used with precaution. Mankind could reach greater heights with the help of artificial intelligence, if it is handled properly.

Content submitted by: Abhinash Dey



Written by Debobrata Deb

He has always been interested in technology, especially computers and gadgets, since childhood. Gathering and sharing as much knowledge as he can is what he likes to do most.