
Questioning the Trust Fall of Artificial Intelligence | Joseph Stanton

Machine learning is a branch of artificial intelligence that gives systems the ability to automatically learn and improve from past experience rather than from explicit programming. It is most notorious for powering “intelligent” chatbots and self-crashing cars. This technology has the possibility of either ending human conflict and world hunger, thus ushering in a utopia, or opening Pandora’s box for the human race, leading to a robot uprising and the mass extinction of life on Earth. Today I’m going to try it out on my classmates.

Artificial intelligence, or AI, is intelligence demonstrated by machines rather than by humans and other animals. The birth of artificial intelligence can be traced back to Dartmouth College, where in 1956 students and professors produced programs capable of playing chess. Back then, scientists, defense contractors, and hardcore math geeks were the only ones playing with computers. The computer was commonly seen as the thread connecting mechanical computation and human thought.

Despite the personal computer being on the cusp of a cultural explosion, AI research hit a lull entering the 1970s. The general public’s expectations faltered, and investors failed to see the light at the end of the tunnel. Ambitious projects to translate language by machine were attempted and failed. DARPA (the Defense Advanced Research Projects Agency) ended its funding for AI research in 1969, plunging the industry into an AI winter.

Nowadays, the world is eager to exploit artificial intelligence for everything from robotic surgery to grocery shopping. And with the world awaiting an AI revolution with open arms, people have been jumping on the bandwagon. But some are skeptical.

Elon Musk called AI “more dangerous than nukes.” One might not see the danger while interacting with Amazon’s Alexa; anything more complicated than a simple command being met with “Sorry, I don’t know that” is hardly alarming. But what Musk and others fear is a runaway situation. A general AI would be capable of completing any human task, including writing code. Once it can do that, it can improve its own code, making itself smarter. The cycle repeats faster and faster, soon surpassing any human being’s intelligence. Then any collection of human intelligence. Then the collective intelligence of humanity. And this could all happen ridiculously fast, leaving humans as unable to halt the wave of total AI control as an ant trying to stop a freight train.
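The runaway dynamic Musk describes is, at its core, just compounding growth. A toy sketch (every number here is invented purely for illustration, not a prediction):

```python
# Toy model of the "runaway" scenario: an AI spends each cycle improving
# its own code, and each improvement multiplies its capability.
# The starting point, target, and per-cycle gain are all made up.

def cycles_to_surpass(start=1.0, target=1e9, gain=1.5):
    """Count improvement cycles until capability exceeds `target`,
    assuming each cycle multiplies capability by `gain`."""
    capability, cycles = start, 0
    while capability < target:
        capability *= gain
        cycles += 1
    return cycles

# Even modest per-cycle gains cross any fixed bar in a few dozen cycles,
# and a slightly larger gain gets there dramatically sooner.
print(cycles_to_surpass(gain=1.5))
print(cycles_to_surpass(gain=3.0))
```

The point of the sketch is only that exponential self-improvement outruns any fixed threshold quickly, which is exactly why the "freight train" image gets used.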

Dr. Zdravko Markov, Computer Science professor at CCSU, isn’t convinced. “I don’t think this will happen soon, or maybe at all, because what AI does now, just modeling some small aspects of thinking in general, it may do it much better than humans, like image recognition, speech.”

Yet, as Markov says, while artificial intelligence can do some tasks better than humans, like IBM’s Watson playing Jeopardy!, it is not yet at the point of proper natural intelligence. “Human thinking is all of these things together, and it’s also other things; working in a complex environment. So it’s a really complex phenomenon to model.” Even though computing power has grown exponentially since the days of vacuum tube computing, with Moore’s law observing that transistor counts double roughly every two years, it’s still hard to fool someone with an artificial intelligence. Huge server farms are required for even barely convincing AI.
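To get a feel for what that doubling adds up to, here is the back-of-the-envelope arithmetic, using the Intel 4004 (1971, roughly 2,300 transistors) as a commonly cited starting point:

```python
# Moore's-law arithmetic: transistor counts doubling roughly every two
# years, projected forward from the Intel 4004 (1971, ~2,300 transistors).
# This is a rough trend line, not an exact physical law.

def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Estimated transistor count for a flagship chip in `year`."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

# Nearly five decades of doubling turns thousands into tens of billions.
print(f"{transistors(2018):,.0f}")
```

Which is why raw hardware growth alone still hasn't produced convincing general intelligence: the bottleneck Markov describes is the complexity of the phenomenon, not just the transistor budget.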

Not all are convinced that an artificial intelligence takeover would mean the end of humanity. It’s possible that a utopia, rather than a dystopia, will emerge in the wake of the AI singularity. If AI were only to aid humanity rather than subvert it, it could become a godlike, or at least paternal, entity. AI could alleviate crises that plague modern man, such as illness, war, and famine. Even basic minutiae such as boredom and trivial loneliness could become things of the past, a concept explored in the 2013 movie Her. One could imagine the appeal of having an infinitely intelligent friend to spend time with.

Another project from AI’s biggest skeptic could make things even more abstract. Musk is working on a device that interacts directly with the human brain, able to read thoughts and feelings and also to stimulate them. This could let the human collective and AI truly act as one interconnected entity, sharing consciousness through a much more efficient channel than language. Acting as one consciousness, humans and AI could do great things.

Well, now that we have all that exposition out of the way, let’s get down to the fun stuff: using artificial intelligence on my classmates. Namely, making a Frankenstein’s monster out of Blue Muse Magazine.

The software I decided to use for this project is available for free online. The catch is that it requires powerful hardware to run. The crucial parts of this project were a dedicated Nvidia video card (or cards, if you have a need for speed) and the Ubuntu operating system. These are the primary requirements for running my chosen software the way I want to. One could possibly run these programs without them, but that is uncharted territory.

If you are having trouble, Google is your friend, as you are not the first.

A set of images based on a common theme makes the best training data. “Faces of my classmates” is the theme of my training set, pulled straight from our “About” page. This should allow the software to learn what a human face looks like by applying filters to the images and finding commonalities. After enough time has passed, we should have an amalgam of my fellow classmates and myself.

And with that, we are ready to train hyperGAN.

Well, okay then. HyperGAN is mesmerizing, but I was not expecting this. Hmm.

The training set might be too small; 13 people is a tiny dataset. Or possibly the GAN accidentally trained on how “Derek” all of my classmates are. And Derek happens to be the most “Derek.”

Or I might be misusing the program. Even though I have everything installed correctly, the documentation for hyperGAN is hardly comprehensive. Exploring other machine learning software might be fruitful as well; there is a ton of it being written.

And that’s one of the great things about AI and machine learning at this moment: it’s all so accessible. Whether we become slaves to a robot overlord or frolic in an AI utopia, everyone should be keeping an eye on AI.

Companies like OpenAI are busting down the barriers to entry in this field, and for good reason. They share the fears of Musk and others about AI taking over, and are looking to prevent it through open source software tools, letting schmucks like me poke around in other people’s expensive research projects.

You can rest assured that machines that reliably think for themselves are in our future, possibly closer than we think. And should you suspect your self-driving car plans on killing you, then thanks to companies like OpenAI, you and anyone else can pop the hood and check out the code yourself.


Here is what I call “Joe’s quick-and-dirty AI guide.” This should let you play with the software that I used in my research:

A powerful computer running Ubuntu 16.04 is required to train the image set in a reasonable amount of time. For this, we need a dedicated Nvidia GPU, like a GTX or Quadro. Otherwise, one could run the TensorFlow software on the CPU, but that takes a long time. CUDA and cuDNN are both dependencies for TensorFlow when using a GPU. NumPy is a library for scientific computation in Python, and Pygame is required for the visual part of hyperGAN.

I realize this might look like word salad to the uninitiated, so here are some helpful links in order of installation:

Ubuntu 16.04 –*

Nvidia CUDA Toolkit 9.0 –*

Nvidia cuDNN 7.0 –*

Tensorflow –

numpy –

pygame –

hyperGAN –

*Make sure to use the correct versions of the software listed above.
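Once everything is installed, a quick standard-library check can confirm which of the Python pieces your interpreter can actually see. (The import names below are the usual ones for these packages, but they can vary between versions, so treat the list as an assumption.)

```python
# Report which of the stack's Python dependencies are importable.
# Uses only the standard library, so it runs even on a fresh machine.
from importlib.util import find_spec

DEPS = ["tensorflow", "numpy", "pygame", "hypergan"]

def missing(deps):
    """Return the subset of `deps` that Python cannot import."""
    return [name for name in deps if find_spec(name) is None]

gone = missing(DEPS)
for name in DEPS:
    print(f"{name:10s} {'MISSING' if name in gone else 'ok'}")
```

If anything shows as missing after installation, the usual culprits are installing into a different Python environment than the one you run, or a CUDA/cuDNN version mismatch breaking the TensorFlow import.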

Blue Muse Magazine is a general interest literary magazine published by the students of the English Department at Central Connecticut State University in New Britain, Connecticut. We publish poetry, fiction, and a gamut of creative nonfiction on anything and everything the blue muse inspires us to write.
