Tuesday, November 17, 2015

Fear of the Unknown: Artificial Super-intelligence and the Road Ahead

Last week I had to write an essay for my composition class exploring the conflict between technology and ourselves, the human race. In preparation for this essay we watched the Battlestar Galactica miniseries, the episode "Who Mourns for Adonais?" from Star Trek: The Original Series, and Blade Runner. We also read several essays exploring the topics behind those episodes and that film, and we read Do Androids Dream of Electric Sheep?

Now, all of these, with the exception of some of the essays, were made in the sixties, seventies, and early eighties. This was when new technologies first began impacting the mass market, and I can only assume that older people saw it as an invasion. For that reason I would expect movies and TV shows of that era to have "evil AI" antagonists. But I was astounded, when we discussed our essays in class, to learn that most of my classmates still felt wary of technology getting so advanced that it becomes sentient.

We are a generation that has grown up inundated with technology, which gets smarter and more advanced every year. Although classic sci-fi has the us-versus-them mentality of humanity at war with the machine, that too has been evolving. Now not all A.I. are automatically the evil antagonist, and when they are, there is usually a counterbalance with a benevolent, or at least benign, A.I. also in existence. For example, in Avengers: Age of Ultron, Ultron is the evil A.I. determined to erase humanity, but another A.I., Vision, is created to fight him on behalf of Earth.

Even in 1982's Blade Runner, the concept of the book it was based on is changed to reflect a loss of humanity within the human race itself, while humanizing the replicants. In fact, in most sci-fi media where there is an evil or hostile machine, it is the result of human failing: continuing to treat a sentient race as slaves or tools, giving them a singular objective to achieve at the expense of all else (e.g., a defense network like Skynet), or trying to destroy them after their usefulness has run out.

All of my classmates wrote about things like how our lazy use of technology is absurd, how further technology immersion is dangerous to "authentic" human relationships, and how it is vital to preserve our moral code in the face of technological advancement. I saw things like Facebook, email, and texting thrown out there as distancing people from each other. As someone who has spent a solid year on the opposite side of the ocean from my family, somewhere a letter takes three weeks to arrive, I can say that all of those things were the only way for me to stay close to them.

You know how they say that writing is closer to your thoughts than your speech? I believe the same holds true for digital forms of communication. I communicate more with my brother now via texting, hundreds of miles apart, than I did with my voice when we lived in the same house. My relationship with my mother has actually improved since we stopped talking face to face every day. I am better friends with people when they are able to contact me quickly to share funny anecdotes or pictures. I ask: how would long-distance family ties remain as strong if we did not have such access to instant communication?

One essay talked about how technology has begun dictating what we should do, something, it says, that was previously only done by gods. Now, if you're skeptical of the existence of divine entities, or even just have an open mind, it could be said that we have moved from one man-made construction to another. If you think about it, that says that we have always needed something to tell us what to do. We are a race of sheep.

Think about this for a second: some day far in the future we meet an alien race somewhere out there, or they come to us, and they are so profoundly different from us that we might doubt their sentience on sight. How would the human race react then, if we fear the actualization of self in something that we ourselves create? Even worse, if we have the advanced technology to build sentient androids but refuse to acknowledge their rights as intelligent, conscious, self-aware beings just because we made them to serve us, how would the aliens judge us then?

I think Captain Picard said it best in "The Measure of a Man," when talking about Commander Data's rights as a sentient being under the laws of the Federation:

"Now, the decision you reach here today will determine how we will regard this... creation of our genius. It will reveal the kind of a people we are, what he is destined to be; it will reach far beyond this courtroom and this... one android. It could significantly redefine the boundaries of personal liberty and freedom - expanding them for some... savagely curtailing them for others. Are you prepared to condemn him and all who come after him, to servitude and slavery? Your Honor, Starfleet was founded to seek out new life; well, there it sits! - Waiting. "
