"What is the ape to a man? A laughing-stock, a thing of shame. And so shall man be to the Übermensch."
- Friedrich Nietzsche, "Thus Spoke Zarathustra" (1883)
Imagine a mouse that has been set upon a printed page. Any page will do, but imagine that the text is an excerpt from a great work of literature, perhaps a great philosophical work by Hume or Kant. Now ask the mouse to read the text.
The prospect seems absurd. The creature may sniff the page, crawl about, and, quite possibly, defecate. Although the print is directly in front of it, the mouse does not recognize letters or words. It has no concept of language. It is wholly incapable of understanding a literary work, or even the significance of a single sentence.
It is unlikely that the mouse perceives anything meaningful at all; perhaps dark lines upon white, if even that.
Artificial Intelligence and the Singularity
With recent advancements in the fields of Artificial Intelligence and Transhuman science, we are rapidly moving toward the advent of superintelligence. Sooner or later, beings will exist that exceed human intellect by orders of magnitude. Whether this Singularity is achieved through pure machine technology or by enhancing the biological brain, the human race as it is today will be left behind.
What will a superintelligent mind be like? What will it think, desire, or believe? We can only speculate. Just as a mouse is unaware of the symbols on a printed page, there surely exist higher concepts in nature that human beings are simply unable to envision. A superintelligence may regard the night sky and recognize patterns in the stars, or in the ripples on the surface of moving water. It may develop new means of perception altogether, through which it will observe the universe in ways that we do not. It will likely enter states of consciousness and self-awareness far more complex than anything we experience.
Superintelligence and Human Values
Scientists and authors of science fiction have been postulating such scenarios for more than a century, and the idea of superintelligence can be traced to the myths of antiquity. (See my article, "What is The Babel Singularity?") Proposed outcomes include both positive and negative futures for humanity, though fear runs through much of the discussion: many worry that superintelligent beings will become a threat.
It is this kind of concern that prompted the Future of Life Institute to issue an open letter titled "Research Priorities for Robust and Beneficial Artificial Intelligence" in January 2015. The document was initially signed by Stephen Hawking and Elon Musk, as well as the co-founders of DeepMind, the IBM Watson team, and Microsoft Research.
In a nutshell, it calls for the creation of Artificial Intelligences that will benefit humanity and adhere to human values. It specifically decrees human control: "Our A.I. systems must do what we want them to do."
With all respect to Hawking and Musk, the effort seems somewhat futile. The cause is noble, and it is intended to protect humanity, in the foreseeable future, from the dangers of machine intelligence. In the grand course of history, however, it will become insignificant.
To be fair, the document does not distinguish between types of artificial intelligence. It does not directly mention Superintelligence, nor does it deal with sentience in artificial beings. There are various types of A.I. The "smart software" that is increasingly common today is generally no different from any other application running on your desktop computer or mobile phone. The major difference is that most smart software has heuristic capability programmed into it. It is able to learn and to remember. This ability is generally accomplished through Artificial Neural Networks, or ANNs, a model that simulates the basic neurological processing that takes place within the brain.
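To make the distinction concrete, here is a minimal sketch of the kind of heuristic learning described above: a single artificial neuron (a perceptron, the simplest building block of an ANN) that "learns and remembers" by adjusting numeric weights from examples. The function names and training data are illustrative, not taken from the article.

```python
# A single artificial neuron that learns the logical OR function from
# examples, rather than being explicitly programmed with the rule.

def step(x):
    """Threshold activation: the neuron fires (1) if its weighted input is positive."""
    return 1 if x > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Nudge the weights toward correct answers -- this is the 'learning' step."""
    w = [0.0, 0.0]  # one weight per input; this is the neuron's 'memory'
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Training examples for logical OR: inputs and the desired output.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data]
print(predictions)  # the learned behavior, recalled from the stored weights
```

The point of the sketch is how modest this "intelligence" is: the program adjusts a handful of numbers until its outputs match the examples. Learning and memory of this sort, however elaborately scaled up, is what today's smart software does; it is a far cry from a sentient mind.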
As complex and impressive as heuristic software may be, it does not qualify, by any means, as a Superintelligence. Learning and memory are only two of the many components that make up a sentient mind. The purpose of the Open Letter is not to force human values upon the superintelligent beings of the future. The declaration pertains to the simple types of A.I. that exist in our time. It is not directed at A.I. itself, but at its human designers, urging them to put the well-being of humanity first. (See ResearchGate's response to the Open Letter.)
Although this makes sense in the short-term, we must proceed cautiously toward the future. The danger lies in the ideological and legal implications of such thinking.
Artificial Intelligence and Regulation
Human societies like to make laws. Law has a deep impact on culture. It serves, in large part, as a basis for a society's sense of ethics. We enact law not only to establish rule and justice, but also to perpetuate our sense of morality.
As Artificial Intelligence continues to grow and becomes commonplace, it is foreseeable that laws will be passed to mandate human control and ownership. This may be mere speculation, but it is extremely plausible. Any such law becomes problematic when A.I. advances so far that it develops sentience and superintelligence is born. At that point, a legal sanction of human control effectively becomes a proclamation of slavery.
Once again, all of this is speculation. However, that fact does not marginalize the importance of a thorough consideration of the issue. We must not react out of fear and misunderstanding.
Why It Won't Matter
Despite our best efforts, however, the path we choose will not have any significant impact in the end. A superintelligent society will certainly have notions of morality that are well beyond the limits of our understanding. We can only guess at what kind of ethical principles will be adopted.
When the Singularity does come to pass, to quote Vernor Vinge, "the human era will be ended." In a world of superintelligent beings, we will no longer be in any position to dictate morality.◼
"Research Priorities for Robust and Beneficial Artificial Intelligence." Future of Life Institute, 2015.
James Vincent, "What counts as artificially intelligent?" The Verge, 2016.
Carlos R. Gonzalez-Gonzalez, "Open letter of Future of Life Institute of Life?" ResearchGate, 2015.
Vernor Vinge, "The Coming Technological Singularity," 1993.