From self-driving cars to Siri and the Roomba, Artificial Intelligence (AI) is penetrating every facet of our daily lives. AI is no longer portrayed as a robot with humanoid characteristics; instead, it can encompass anything from search engine algorithms to autonomous weapons.
Technically, the AI used in Siri, the Roomba and other such products and software is termed narrow or weak AI, as it is designed to perform a single, narrow task. However, the long-term goal of many developers is to create products and services built on general or strong AI, which could outperform humans at almost all cognitive tasks. While this may sound positive, there is a need to consider the safety of relying on technology, Artificial Intelligence and robotics to fulfil so many functions, and AI safety and its risks have come under the spotlight recently.
Staying in control of AI
In the short term, the goal of making AI beneficial to society is a good one, and it motivates research in numerous areas such as economics and law while raising hard questions around validity, verification, safety, security and control. While having your laptop hacked or crash may be a nuisance and an inconvenience, the same thing happening to a self-driving car, a pacemaker or a power grid could prove fatal to a large number of people. There's also the potential for disaster as the race to develop lethal autonomous weapons heats up and the world powers recognize the potential that Artificial Intelligence has for causing global havoc.
Long term, the quest for general or strong AI may succeed, so there is a need to ensure that superintelligent systems do not take control, and that their goals are aligned with ours before they reach that level of capability. New technologies could help us eradicate poverty, hunger, drought and war, but if Artificial Intelligence takes control, or those who control it use it for nefarious purposes, the risks of this type of technology could well outweigh the benefits.
How AI can become dangerous
Researchers generally agree that even a superintelligent strong AI is unlikely to exhibit emotions such as love, hate or compassion. Although it is a common sci-fi movie theme that leaves many wondering 'what if', the chance of AI becoming intentionally malevolent or benevolent is remote. The human chemical makeup that allows us to feel happy or sad is simply not present in AI, and fabricating emotion is almost impossible. When it comes down to it, however, Artificial Intelligence can still be dangerous, and experts highlight the following two reasons why:
Programs designed for devastation: part of the problem with AI is that humans create it. In the case of something like autonomous weapons, this is a frightening reality, as someone with ill intent would have the power to cause mass devastation. A number of respected professionals have called for a ban on autonomous AI weapons, and their concern is certainly justified.
In the wrong hands, an AI weapon could cause enormous harm, and an AI arms race could lead to an AI war. Armed drones have already seen real-world use, and this is just the start. An even bigger issue is that AI weapons would likely be designed to be incredibly difficult to disarm, to prevent the enemy from thwarting them, and thus humans could easily lose control of the situation. Even with narrow AI this is a risk, but as strong AI develops, so too do the levels of intelligence and autonomy involved.
Achieving a goal by any means necessary: the second major risk is that Artificial Intelligence may be programmed to do something beneficial but adopt a destructive method of achieving its goal. If an AI's goals are not fully aligned with our own ethics and morals, it may perform its task at any cost. For example, if you instruct a self-driving car to take you somewhere as fast as possible, it may do exactly that, disregarding the rules of the road and putting your safety and that of others at risk. Alternatively, a superintelligent system tasked with a geo-engineering project may destroy a sensitive ecosystem simply to achieve its aim.
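The self-driving car example above is really a point about objective functions: an optimizer pursues exactly what it is told to pursue, nothing more. A deliberately simple, hypothetical sketch (the routes and their properties are invented for illustration) shows how an agent that minimizes only travel time picks an unsafe plan, while the same agent with safety folded into its objective does not:

```python
# Toy illustration (hypothetical data): an agent told only to minimize
# travel time will pick an unsafe plan unless safety is part of the
# objective it is given.

routes = [
    {"name": "highway_at_legal_speed", "time_min": 30, "breaks_rules": False},
    {"name": "highway_at_double_speed", "time_min": 15, "breaks_rules": True},
    {"name": "shortcut_through_school_zone", "time_min": 20, "breaks_rules": True},
]

def fastest(routes):
    # Objective: time only -- "get me there as fast as possible".
    return min(routes, key=lambda r: r["time_min"])

def fastest_safe(routes):
    # Objective: time, but constrained to rule-abiding plans.
    legal = [r for r in routes if not r["breaks_rules"]]
    return min(legal, key=lambda r: r["time_min"])

print(fastest(routes)["name"])       # -> highway_at_double_speed
print(fastest_safe(routes)["name"])  # -> highway_at_legal_speed
```

Nothing here is malicious; the "unsafe" choice follows mechanically from an objective that omits what the instructor took for granted, which is exactly the alignment problem in miniature.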
These examples illustrate clearly how dangerous Artificial Intelligence can be: not because it is malevolent or has ill intent, but because humans ultimately control it, it has no conscience, and its competence is gauged by results, not by how those results are achieved. AI can be incredibly effective at achieving a goal, but if that goal is not aligned with ours, there is an immediate problem.
Increased interest in AI safety
Elon Musk, Stephen Hawking and Bill Gates are just a few of many who have expressed concern about AI recently, and other leading science and technology figures have also weighed in on the debate. Part of the reason that AI is now such a hot topic is the multitude of technological advancements made in recent years. Previously, AI was considered the stuff of science fiction, not something that would ever really become a reality. Experts were convinced that major Artificial Intelligence milestones were decades away, but some have now been reached, and superintelligence within our lifetime now looks feasible.
Research presented at a conference on AI Safety in Puerto Rico in 2015 suggests that human-level AI could arrive as early as 2060, and that if this is the case, starting to discuss and implement safety measures now is essential.
A smarter breed
Essentially, Artificial Intelligence has the potential to become far more intelligent than any human being, and there is no guaranteed way to predict how it will eventually behave. Past technological developments cannot be used as a benchmark, as technology is evolving at a rapid pace, and the chances are that AI could well outsmart us in the future. Although AI is created by human hands, it can retain far more information and execute tasks far more efficiently.
Technically, we are in control of AI's evolution, and at present we are the smartest beings on the planet. However, Artificial Intelligence could push us out of this top spot, and if that happens we could lose control. Experts have pointed out that AI can be harnessed for good, and that our civilisation will flourish if it is. But we need to win the race between the growing power of our technology and the wisdom with which we manage it. Artificial Intelligence can do a great deal of good, but to stay ahead without impeding progress we need to ensure that AI safety research is conducted and implemented every step of the way.
Robots that seek revenge, weapons that wipe out whole countries and machines that take our jobs are just some of the many fears society seems to have surrounding AI. The media has done its share of scaremongering, and many fear that the rise of AI will affect society negatively.
However, we already rely on AI in so many ways, and much of this benefits us dramatically. Siri is just one example of how useful Artificial Intelligence can be, and although the unknown can be risky (and that is exactly what the future of AI is), we can coexist with this incredible technology and harness it for the good of mankind, as long as the risks are kept transparent and carefully navigated.