I am a software developer at Hacware, where I'm currently working with a team of software engineers to build a machine-learning-driven cybersecurity program.
Earlier this year, Hacware started Engineer Takeovers, where employees could share their unique perspectives on “Life at Hacware” by running the company’s social media. This allows us to provide value to the business and engineering communities through the lens of our work at Hacware. The first takeover was run by our newest hire, Trenton Goins, a recent CS graduate. Next came the mysterious “Hacker Fish”, a white-hat hacker in Hacware’s employ.
As the months passed, I nervously racked my brain over how I would provide value to the developer community once my turn came. Yet, try as I might, the most interesting idea I could come up with was to post a bunch of memes related to our products.
Of course, aside from entertainment, I found it hard to justify these sorts of posts as a way to provide any sort of value to the business or engineering communities. After all, what did memes have to do with my work at Hacware?
Then one night, as I was telling my wife Lucia about my problem, she said to me, “But aren’t memes already like what you’re working on?”
“What do you mean?” I asked.
I hope I’m not the only one whose first reaction was to balk at that suggestion. But the next day as I tried to figure out a clever way to relate the Spider-Man pointing meme to Hacware’s new anti-phishing software, I realized she was right. By creating memes in order to share them with the tech community, I would unconsciously be enacting the very same philosophies and principles that make machine learning what it is.
What is a Meme?
In the common vernacular, the word “memes” refers to humorous combinations of images and text that are shared over the internet. But long before it was used to describe these sorts of jokes, “meme” was coined by Richard Dawkins in his 1976 book The Selfish Gene. In the book, he postulated that ideas and culture could spread, mutate, and evolve similarly to the spread of traits in evolutionary biology. Just as certain traits better adapted to survival would multiply while others died off, ideas and culture that were easier to spread would propagate, while those less suited would diminish or vanish.
In the early 2000s, many realized that there was no better example of this concept than the spread and evolution of inside jokes on the internet. Jokes that would start in one form and, in the process of being shared and reshared, transform into something entirely unrecognizable from their original state.
But How Are Memes and Machine Learning Even Comparable?
Genetic memory is the concept that living creatures possess memories at birth independent of their own sensory experience that can be (or have already been) incorporated into the genome over time. Trauma, phobias, and the very ability to comprehend language are considered products of genetic memory in humans. In the broadest sense, genetic memory has also been used to refer to animals’ ability to pass down instincts to their descendants.
When using machine learning, we’re essentially teaching a computer how to respond to a massive amount of stimuli: a wide range of attributes that may not seem related at all, but between which the computer must learn to find connections and draw conclusions.
And there is no denying that machine learning is a brilliant tool. It allows computers to find patterns far more quickly than our brains ever could, and in ways we may never fully understand. What is learned is recorded and carried forward through generation after generation of training until the dominant response reveals itself, much as a trained response can be passed down through genetic memory.
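To make that idea concrete, here is a minimal sketch (not Hacware’s actual system) of a single perceptron: each pass over the data acts as one “generation” of learning, the weights carry what was learned forward, and after enough generations the dominant response emerges.

```python
# A minimal sketch of iterative learning: a perceptron nudges its weights
# each "generation" (epoch) until a stable, dominant response emerges.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of feature vectors; labels: 0 or 1 for each sample."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):  # each epoch is one "generation" of learning
        for features, label in zip(samples, labels):
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction
            # strengthen connections that explain the label, weaken the rest
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    activation = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if activation > 0 else 0

# Toy example: learn the logical AND of two attributes.
w, b = train_perceptron([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 0, 0, 1])
print(predict(w, b, [1, 1]))  # → 1
print(predict(w, b, [0, 1]))  # → 0
```

A real system would use far more features and a far richer model, but the rhythm is the same: expose the machine to stimuli, record what it learned, and repeat across generations.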
Yet when Stanford researchers explored how to teach a machine to make its own memes, the results, while mostly coherent, were still rather simplistic.
Compare that with the following four-part meme I pitched to my boss and coworkers.
Quite esoteric. And as expected, responses ranged from uproarious laughter to utter confusion. That’s because this meme was built on so many layers that even somebody with in-depth knowledge of machine learning and the Star Wars prequels might not get it. In fact, these four image macros were a collaborative effort between Lucia and myself, because her prequel meme-savvy far outstrips my own.
As mentioned, both our brains and machine learning programs try to make connections between seemingly random stimuli, and images like the one above are the ultimate test of that. This meme alone draws on information from multiple movies, preexisting memes and their expected punchlines, and at least a cursory understanding of how machine learning works.
There is far too much going on there for the Stanford A.I. to grasp at its current level. The problem is that while machine learning algorithms learn from many generations of inputs, memes also evolve over multiple generations of incarnations, often in difficult-to-predict ways.
Our ability to understand and enjoy memes comes from our knowledge of so many disparate topics, often on an unconscious level. The phrases “Wait a minute. How did this happen? We’re smarter than this” or “I don’t think the system works” are completely innocuous, yet for reasons we may not even fully understand, they’ve become punchlines. Those lines have been used in funny ways so many times that the quotes themselves have become funny. And even if we don’t consciously know why, we remember it, with a sort of “memetic memory”, if you will.
How Did We Get Here?
This iteration on the General Kenobi meme is a lot simpler to understand. One doesn’t need to know the original scene or the joke in order to get it.
This one, however, requires at least a basic understanding of the Star Wars prequels, and the fact that Ewan McGregor played both Christopher Robin and Obi-Wan Kenobi.
Until eventually after enough iterations, the very image itself becomes a punchline. Try explaining this picture out of context to somebody, let alone a computer.
Why Does It Matter?
People often talk about machine learning — or A.I. in general — as a sort of dark magic. Undeniably powerful, but with an underlying danger: that machines will soon become so smart that they outstrip human intelligence and take over the world. Even Elon Musk has claimed that “A.I. is far more dangerous than nukes” and that “As AI gets probably much smarter than humans, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger … I do think we need to be very careful about the advancement of AI.” We as a society have this bizarre fascination with the concept of artificial intelligence becoming too smart to be controlled.
But if computers really are so smart, why can’t they figure out that when I say “hello there”, I expect them to respond with “General Kenobi”?
The answer is that there is still something missing. Machine learning may be able to make connections our brains can’t fathom, just as a calculator can perform calculations we cannot, but there is a threshold of complexity, of imagination, that computers haven’t been able to cross. For while we have been able to program massively complex systems of logic into a computer, creativity is another crucial piece of our skill set, and one that computers have never been able to figure out.
Yes, the Stanford experiment shows that progress is being made toward machine intelligence that can one day replicate surreal internet jokes, but even then, the key word is replicate. The above examples show a mishmash of quotes, images, and concepts that are already considered memes. Until we learn how to imbue a machine with the sheer creativity inherent in all human beings, they won’t be able to create new memes like we can, and we won’t have to worry about welcoming our robot meme overlords.
And that’s okay. Just because we’ve yet to create a machine learning algorithm that can out-create a human being doesn’t mean that the advances in machine intelligence are any less impressive. Machine learning is an incredible tool that still sometimes feels like magic. But that’s not something to be afraid of. Because like memes, it wouldn’t exist without the continued efforts of countless individuals. And like memes, it holds a mirror up to the human condition in unexpected ways.
When I came to my wife with these conclusions, feeling rather bright for making the connections, she just smiled wryly and said, “I think the system works.”
Looks like I still have a lot to learn.