
Artificial Domination

Machines learn language, plunging the world into post-apocalyptic terror. Or will they?

Published on: January 9, 2017
Read time: 5 min.

The other day a friend of mine was having trouble scheduling a meeting. He and his client kept emailing back and forth over several days, just trying to figure out when they were both free. “There’s a bot for that,” I said.

I showed him x.ai, an artificially intelligent assistant that will email someone for you, syncing your calendar and all, to get your meetings scheduled. A mixture of fear and wonder overtook him.


“Haven’t they ever seen Terminator?” he said. My mom said the same thing when I told her about it. Then my brother. Then I showed my dad Google Home—I thought maybe it could boost his productivity around the office—and he said something similar. I can’t remember what they said exactly, but I do know this: I’m far less skeptical.

There’s a strange fear hanging over computers and their ability to do things better than we can. And it’s true: they can do some things better than we can.

The Great A.I. Awakening

A few weeks ago The New York Times published a long piece called “The Great A.I. Awakening.” That might be overstated, but not by much. This generation will see huge swaths of the workforce dominated by robots, both hardware and software, but that shouldn’t be worrying.

Google overhauled its Translate system over the summer. You probably didn’t notice, but the new system improved more in a few months than the old one had in its entire lifetime. Why is this a big deal?

First, let’s talk about language and how badly machines handle it. Have you ever wanted to throw Siri through a wall for misunderstanding “Give me directions to Union City Mall” and then looking up a recipe for an onion cheeseball? Of course you have.

Two Ways to Learn Language

There are two ways to learn language. The first, older and less efficient, is to cram dictionary definitions into a “mind” of sorts and let it look things up. At Google’s scale, that’s millions and millions of words.
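To make that concrete, here’s a toy sketch of what lookup-style translation amounts to. The four-entry glossary is made up for illustration, and real systems stored millions of entries, but the weakness is the same:

```python
# A toy sketch of the "cram in the dictionary" approach: translate by looking
# each word up in isolation. The four-entry English-to-Spanish glossary here
# is invented for illustration; old systems stored millions of entries,
# but the weakness is the same.
glossary = {"the": "el", "dog": "perro", "runs": "corre", "fast": "rápido"}

def translate(sentence: str) -> str:
    # No context, no grammar, no idiom -- just lookup, word by word.
    return " ".join(glossary.get(w, f"[{w}?]") for w in sentence.lower().split())

print(translate("The dog runs fast"))  # "el perro corre rápido" -- passable
print(translate("The dog runs hot"))   # "el perro corre [hot?]" -- the idiom is lost
```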

By the way, that’s also how hackers teach their computers to crack passwords: they cram them full of every electronic book available on the internet, a lot of books, including dictionaries and stored records of major governments, and then tell them to guess your password at billions of guesses per second. If your password is just a few words you remember, you should think about switching.
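Here’s a toy sketch of the idea, with a deliberately tiny, hypothetical wordlist standing in for the real ones:

```python
from itertools import product

# A toy sketch of a dictionary attack. The four-word list is hypothetical;
# real attack wordlists hold millions of entries and run on specialized
# hardware at billions of guesses per second.
wordlist = ["correct", "horse", "battery", "staple"]

def crack(target: str, max_words: int = 3):
    """Try every combination of up to max_words dictionary words."""
    guesses = 0
    for n in range(1, max_words + 1):
        for combo in product(wordlist, repeat=n):
            guesses += 1
            if "".join(combo) == target:
                return guesses
    return None

# A passphrase built from remembered words falls out of the list quickly.
print(crack("horsebattery"))  # cracked on guess 11
```

The arithmetic is the unsettling part: three words drawn from even a 10,000-word vocabulary make only about a trillion combinations, and at billions of guesses per second that falls in minutes, not centuries.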

The Amazon Echo Dot brings a virtual assistant into your home for as little as $49.99.

The problem with this way of learning language is that it’s not how we learn language at all. A toddler doesn’t sit in front of a stack of books before we speak to her. We learn by context and conversation; we learn by movies and jokes and innuendo. That’s why the internet gave us things like Urban Dictionary and Wordnik. From them we learn the language of the people, so to speak. Meaning is more fluid than dictionaries say it is, which is why dictionaries are updated every few years.

The second way a computer learns is from the ground up, from raw data. Rather than memorizing dictionary definitions and spitting out a translation, the new Google system learned from its users and the way they use language, mimicking how they were defining terms. Urban Dictionary, for example, is a far superior dictionary for learning certain kinds of language: slang, metaphor, and so on. In this way Google is truly learning, not just computing.
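A bare-bones sketch of that idea, assuming an invented four-sentence corpus: tally which words keep company with which, and words used in the same contexts come out looking like synonyms. (Google’s actual system uses neural networks trained on vastly more text; this is just the principle in miniature.)

```python
from collections import Counter, defaultdict

# A bare-bones sketch of learning meaning from usage instead of definitions:
# tally which words appear alongside each other, so that words used in the
# same contexts end up with matching profiles. The four-sentence corpus is
# invented for illustration.
corpus = [
    "the movie was sick",    # slang: "sick" here means "great"
    "the movie was great",
    "the concert was sick",
    "the concert was great",
]

contexts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, word in enumerate(words):
        for j, neighbor in enumerate(words):
            if i != j:
                contexts[word][neighbor] += 1

# "sick" and "great" occur in identical contexts, so a system learning from
# usage treats them as near-synonyms -- no dictionary entry required.
print(contexts["sick"] == contexts["great"])  # True
```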

A math student who can take answers from a teacher and regurgitate them on a page is one thing, but a student who can generate her own answers—that’s where the fear is.

The Revolution

But why is this way revolutionary for AI? In short, Google is learning from its own data, making faster and more accurate discoveries about how humans use language. All of this led Sundar Pichai, Google’s CEO, to announce that from now on Google will be “A.I. first.”

So, machines can learn language, a far more complicated feat than math or Boolean logic. Because they can do this faster than ever before, they’re starting to take over white-collar jobs.

For a long time it was feared that computers would take over jobs like assembly-line work, phone operation, and, more recently, truck driving, but it seems they’re now taking jobs that require still more education. More education means more pay, so cutting white-collar jobs saves companies even more money.

Last month the Japanese company Fukoku Mutual Life Insurance announced they were cutting thirty-four insurance claims workers, replacing them with IBM’s Watson system, a fancier version of Siri. (You can even read the company press release with Google Translate, where one machine translates the new job titles of other machines.)

This means that millions and millions of jobs are now in jeopardy of being stolen by machines, even jobs held by highly educated workers like paralegals and web designers.

Of course, when computers take over everything, there is a serious problem: a loss of humanity.

A Loss of Humanity

A hundred years ago, when you needed an insurance adjuster to look at something, you’d go into town and find Mr. Thomas Woodson at Woodson Insurance. You’d talk with him about your problem, and he’d come out to the farm and take a look at it himself, trying to find the most common-sense approach to getting it resolved.

Just recently my dad tried to do this, but with a machine. The computer had no record of our address, and the humans running it had no say in how the problem got handled. They were phone operators in India, connected to my dad’s phone over a global internet network, who saw only numbers on a screen. There was no one to come and look at the issue, and it took weeks to resolve.

Turns out that when the company switched over its computer system and all its info, someone had typed 1110 Scenic Highway rather than 1111 Scenic Highway. It was a typo. Is this computer error or human error, or a bad combination of both?


This is an oversimplification of the story, of course, but undoubtedly you’ve experienced a similar thing with packages, cable bills, or any number of idiotic things we have to do in the modern world.

Is it a bad thing?

I said earlier that this isn’t a bad thing. We don’t have to jump to visions of a T-800 kicking in our door with a sawed-off shotgun and a leather jacket.

Right away, the rise of AI forces us to ask the question, “What does it mean to be human?” It forces us to think more critically about our role as stewards and creators of society, how we arrived at this place and where we’re going from here.

Humans are better than machines at a million things, important facets of human culture like cooking, justice, beauty, and compassion. Machines cannot do these things, but they can speed up the processes we’re already wasting too much time on, freeing us from the labors of the ordinary.

Or maybe that’s the very thing that makes us human—ordinary, mundane tasks that slow us down so we can take in what’s really beautiful in life. If only John Connor were here to help us figure out what that is.

Cover image by Dick Thomas Johnson.

Brandon Giella
Brandon is the content editor for Fathom, serving as its copy editor. He also serves as a content developer for The Starr Conspiracy, a full-service digital agency in Fort Worth, TX. You can find him on Twitter and LinkedIn.
