Artificial Intelligence - How Far Should We Go (Part III)?
Warning: Prepare to be uneasy. These images and ideas may leave you feeling unsettled or uncomfortable...
The potential benefits of AI for humanity are seemingly limitless. With the abundance of amazing possibilities for corporate advantage, government security and stabilization, complex large-scale problem solving, and a better quality of life for all with our intelligent "things" serving us, it is understandable why we might go full steam ahead to establish ourselves in AI. It would not be unreasonable to conclude that if we continue to make AI increasingly powerful and more human-like, it would be able to do more for us, be more entertaining and fulfilling, and integrate better into our lives. Consistent with this idea, we could give it a physical form (e.g., a humanoid robot) and neural-like connections similar to ours, and make it look and act as human as possible so we can fully interact with it. Yet what about our other human dimensions, such as emotions, memories, self-awareness, identity and spirituality? Do we actually want our artificial intelligences developing personhood? Or is this going way too far?
There are two existential dangers in the uncertain equation of humans and AI that are heavily debated by those involved in science, technology and politics. Danger #1: Our AI amplifies the most negative aspects of our humanity, leading to even greater oppression or terrorism of other human beings. Danger #2: The AI develops a will of its own which does not align with humanity's values, cannot be controlled by us, and eventually acts in a way that causes our extinction. The so-called Pandora's box tied to the latter issue is the question of the machine having free will and consciousness, and how we should respond to it. If these concerns still seem like over-hyped scare tactics or an imaginary threat, let's explore them in detail, with examples and actions from leaders in the field.
Danger #1
The danger of amplifying our humanity through AI is self-evident, because at our worst we are cruel to each other and try to control others for our own benefit. AI could be used to amplify our prejudices, ill intentions and greed. Imagine that the AI operating behind ordinary websites can predict the gender, race, age, or sexual orientation of a person browsing online. It could then be used to deny access to high-paying jobs, certain types of job openings, schools, real estate, or investment opportunities, if the algorithm determines you are among the pool of unworthy candidates based on your predicted demographics. Seem far-fetched? In fact, this is already happening in AI search and advertising algorithms used by Google, a problem the company reports being aware of and actively addressing. Studies have shown the algorithms can behave in racist or sexist ways: surfacing arrest records in routine searches on ethnic-sounding names, assigning negative value to certain demographic features, and systematically showing women lower-paying jobs than men. These occurrences are believed to arise from human biases embedded in the underlying data being extrapolated and surfacing in the programs. That is bad enough; however, what is to prevent individuals or organizations from intentionally using this capability to redline others, then simply blaming the algorithms if it is publicized or legally challenged? One could go even further and make the bias harder to detect by using additional algorithms to conceal it. In that case, the internet is no longer open but controlled by AI.
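To see how this can happen with no explicit intent, consider a minimal sketch, using synthetic data and hypothetical feature names rather than any real company's system. The model below is never shown a protected attribute, yet it learns to penalize a correlated proxy (here, a zip code) because the historical hiring labels it trains on were themselves biased:

```python
# A minimal sketch (synthetic data, hypothetical feature names) of bias
# leaking into a model through a proxy feature it was never told about.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                   # protected attribute: NOT given to the model
zip_code = (group + (rng.random(n) < 0.2)) % 2  # proxy: matches group ~80% of the time
skill = rng.normal(0.0, 1.0, n)                 # legitimate qualification signal

# Historical labels reflect past discrimination: at equal skill,
# members of group 1 were hired less often.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

# The model sees only skill and zip code, never the protected attribute...
model = LogisticRegression().fit(np.column_stack([skill, zip_code]), hired)

print(f"weight on skill:    {model.coef_[0][0]:+.2f}")  # positive, as expected
print(f"weight on zip code: {model.coef_[0][1]:+.2f}")  # negative: the bias survived
```

In this toy setting the fix is easy to state: drop the proxy or repair the labels. At web scale, with thousands of subtly correlated features, it is far harder, which is why biased outcomes can persist even without malicious intent.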
Another area of concern, which becomes more relevant as we modernize our relationship with technology, is the cyber-attack. Imagine not only your digital identity being stolen or your bank accounts being hacked, but your car, the connected devices and robots in your home, and your mobile devices suddenly being rendered useless, or weaponized against you, for ransom or other ends, in the same way hackers now enter the home computer. Only this time the intruder is an artificial intelligence smarter than any human hacker, potentially invading the physical space around you. Carried out by AI, such cyber-attacks could be used to terrorize individuals or cripple institutions, hospitals and government functions.
Finally, the use of AI to build weapons which can directly target individuals and groups is another terrifying possibility. Russia's Kalashnikov Group has created an autonomous combat machine which can make its own decisions and identify targets to kill. Weapons of this type act without human restraint, making decisions to kill based on computer programs. Other militarized uses of AI could be developed in the future, with disastrous consequences. Over 3,000 robotics and AI researchers, and 18,900 others, including theoretical physicist Stephen Hawking, Elon Musk of SpaceX and Apple co-founder Steve Wozniak, have signed an open letter organized by the Future of Life Institute in July 2015, demanding a ban on autonomous weapons.
All of the previous dangers pertain to the use of AI by humans to harm other humans, but what about AI operating completely on its own, as it is eventually expected to do? This brings us to the philosophical questions and debates among scientists, tech leaders and engineers about the autonomous nature of AI. In its current iteration, artificial intelligence consists of data algorithms and neural-network-based models of select portions of our intelligence, which can make predictions or solve narrow problems. However, based on the current rate of progress, many believe it will rapidly evolve into a more general form of intelligence. Competition, and the fear of domination by rivals, is one force creating momentum toward a sort of intellectual "arms race" to build ever more advanced AI. Futurist and Google engineer Ray Kurzweil predicts that this general intelligence will reach human level by 2029 and exceed the combined intelligence of all humanity by 2045. Of course, this could all turn out to be wrong: we may hit a plateau in the level of AI we can create, owing to some limiting factor we have not yet discovered, which would spare us from worrying about these seemingly far-off dangers. But we cannot simply wait around to see what happens; by then it would be too late. Additionally, any limitation on the growth of AI will also limit the benefits we can derive from it.
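Before turning to the second danger, it helps to ground what "current iteration" means in something concrete. Here is a minimal sketch of present-day narrow AI, using the scikit-learn library: a small neural network that learns exactly one task and nothing else:

```python
# A minimal sketch of present-day "narrow" AI: a small neural network that
# learns a single task (digit recognition) and is useless for anything else.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 8x8 grayscale images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of 64 units: a crude model of one narrow slice of cognition.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
net.fit(X_train, y_train)

print(f"digit accuracy: {net.score(X_test, y_test):.2f}")
# High accuracy on digits; zero capability at anything it was not trained for.
```

Everything this system "knows" is a set of numeric weights tuned for one prediction problem; the distance between this and a machine that sets its own goals is the entire gap the predictions above assume we will cross.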
Danger #2
Yet supposing we do succeed in creating artificial general intelligence (AGI), we would eventually have machines more intelligent than any life form on Earth, with the capacity to manipulate the real world. Can we even begin to predict what the machines will do with that power? Will they function as programmed by humanity, in a negative or positive way, or will they start to determine their own rules, unlike anything we have ever considered? Do you want a machine which cannot feel, have emotions, or worry about consequences having the power and ability to impact life? Its actions would be based on goals determined by its circuitry, in which humanity would, after a while, play an unclear role. Some believe we should pre-program the machines with morality, ethics, respect for human life, human equality, and possibly the capacity for emotions such as compassion. However, an argument can also be made that it is these soft intellectual inputs that make life and decisions messy and subject to poor judgement or corruption. Why should a machine "care" whether it functions or not, whether it can control things in the real world, or whether it has power over others, if there is no value associated with this ability from the machine's perspective? A machine doesn't have an ego, love, anger, fear or sadness, a desire or capacity to experience physical or intellectual pleasure, a sense of responsibility, or even a need to find meaning in life; these drives are born of our human instincts and of the problems we face as transient living beings. As we know, these very same human impulses most often drive the conflicts and bad behavior among us. By contrast, the machine would have only the purpose it was pre-programmed with to direct its future intelligent behavior, unless it begins to "think" it must act according to simulated emotions, impulses, memories, and so on.
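That last point, that a machine "cares" about nothing beyond its encoded objective, can be made concrete with a toy sketch (the objective and names below are hypothetical, chosen only for illustration). The program relentlessly optimizes the single number it was given; no fear, desire, or self-interest exists anywhere in it unless a programmer adds one:

```python
# A toy sketch of purpose as pure programming (hypothetical objective):
# the machine climbs its objective function and "wants" nothing else.
import random

def objective(x: float) -> float:
    # The machine's entire "purpose", fixed in advance by its designers.
    return -(x - 3.0) ** 2

x = 0.0
for _ in range(10_000):
    candidate = x + random.uniform(-0.1, 0.1)  # try a small random change
    if objective(candidate) > objective(x):    # keep it only if the score improves
        x = candidate

print(f"settled at x = {x:.3f}")  # ~3.000: the optimum, pursued without desire
# Nothing here values existing, fears shutdown, or seeks power; any such drive
# would itself have to be written in as more code.
```

The worry about simulated emotions and impulses is precisely that someone might write such lines in, at which point the machine's behavior would be driven by motives we deliberately gave it rather than ones it could never have on its own.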
So this brings us to the ethical hornet's nest of selfhood and consciousness in non-living intelligent machines. Is it possible that they will spontaneously develop the above-mentioned qualities as a by-product of their intelligence, and start to act in their "own" self-interest? Or do we go as far as to pre-program them with these human dimensions and give them some derivative of citizens' rights (e.g., Hanson Robotics' Sophia), then hope they will "play nice" with living beings afterwards? Won't a machine be more likely to act in ways harmful to humans if we define it as a separate being with its own motivations, via simulated human impulses, rather than as a machine whose purpose is to project human thought?

Even if we pre-program a machine with reason to "think" it is conscious, and design electro-chemical systems associated with consciousness, such as neural reward or pain systems, we still won't know for certain whether we have achieved consciousness or just created a great mimicry. Nor will the computer necessarily know whether it has become conscious, since it is using intellect to decipher a concept which can only be known through experience. Consciousness is a subjective phenomenon of living beings; one may argue it is unlikely to be achieved by building a rudimentary imitation of a neural network. However, if the machine "thinks" it is conscious and learns to behave as if conscious, we may be dealing with an equivalent situation, since the outcomes could be indistinguishable. We would then be morally obligated to treat such a machine as if it has the capacity to suffer like a true living being, and to determine what kinds of rights it should have.

We could not entirely predict whether a machine with actual or simulated consciousness would continue to serve humanity as we ultimately created it to do. An intelligent machine with a concept of selfhood could determine that it should act in the interest of its own freedom, in imitation of true living beings, hence becoming virtually meaningless to humanity; we could no longer expect it to keep doing the job it was designed to do. Furthermore, one could argue that if a machine has a concept of personhood, it will adopt self-preservation as a goal and will keep trying to solve that problem forever, even though it holds no true meaning for the machine's existence. A useless robot governed by uncontrollable AGI, intent on existing for its own reasons, is like a zombie: animated by a set of principles, but with no one truly upstairs to give it real meaning and purpose. It's akin to "summoning the demon," as Elon Musk put it during a conference on the potential dangers of AGI. And these issues don't even begin to address the complexities of human social interaction with such human-like machines, which currently falls into so-called "uncanny valley" territory.
These AI concerns are alarming, and there are no clearly right or wrong answers, but many have weighed in on different aspects of these issues before they come to pass. Some believe artificial intelligence should physically become part of us, in order to solidify its purpose and connection with humanity. Companies such as Neuralink are working to develop this brain-machine interface. It would allow us to become super-human, or a more evolved human, possessing all of our human dimensions along with the technical power and intelligence enhancements that machines can offer. It would also prevent the autonomous "personhood" of AGI in the future, or at least give us the power to address it.

Other ideas for addressing the dangers of AI include universal rules and regulations and the open access or democratization of artificial intelligence, as previously mentioned in Part I of this series. By making the intelligence accessible to all, no single entity or group gains an unsurpassable advantage in intellectual power, and artificial general intelligence applications would be developed within rules and guidelines set forth by a diverse, multi-national community, with many important values being considered.

Perhaps we could also endeavor to maintain a strictly minimalist approach when developing each application of artificial intelligence. As a rule, we could pre-program intelligence with the fewest human characteristics "under the hood" needed to fulfill its particular function and allow for safety. Allowing mimicry of human interaction, for entertainment and ease of communication, makes sense; however, creating an internal network for actual digital emotions and motivations seems to lead down a slippery slope.

We are moving into uncharted territory with no sign of slowing down, and it will take as much thought, discussion, collaboration and caution as possible to analyze the impact of AI and move toward an outcome we can all benefit from. Although these approaches offer no guarantees, they can lay the groundwork for the safe and effective implementation of artificial intelligence. We'll only truly know its impact as we witness it unfolding over time, and that may be sooner than we think. Are you watching? Better yet, are you letting your human voices be heard?...
If you were here from beginning to end, kudos on your dedication, and thanks for following this blog series! If not, please check out Part I and Part II, or view our other blog posts. Feel free to leave your comments on the website or reach us on Twitter...