“Mmm…” my hand slid over to my phone, attempting to tap the “dismiss” tab as the alarm blared in my apartment room. After missing it twice, I sat up and glared at the phone.
My heart spiked.
“Five-twenty-five!” I shrieked, jumping into the air and sprinting over to the shower.
Yup. That’s how I began the first day of my internship: waking up ten minutes late. That morning was mostly a blur; I got a shower in record time, the first half freezing and the second half boiling, because I didn’t have the time to attempt to adjust the temperature just right. I ate a banana so fast I probably ate the peel with it, and got dressed like Clark Kent changing into Superman. I was down in my car in less than ten minutes. I felt gross about not brushing my teeth that morning, but I popped in a breath mint. I figured it looked better to show up on time with some plaque than late with a polished, awkward smile.
I was going for my master’s in robotics engineering at the time, a five-year student with a year left. I’d interned before, but nowhere too notable; I started by shadowing a professor who worked in the school’s own robotics wing when I was a sophomore going for my bachelor’s, and two years later, I shadowed at Beneke Electronics, a place best known for its cutting-edge advancements in forklifts. But after some of my professors went out of their way to get me connections, a much different opportunity arose during my fifth year of college: an internship at the Kramer Institution for the Advancement of Artificial Intelligence.
This was nothing to sneeze at. The Kramer Institution was America’s best-funded, and most cloak-and-dagger, laboratory devoted to studying artificial intelligence. And since I was pretty vocal about my interest in AI- it’s why I chose to major in robotics engineering in the first place- a professor of mine hooked me up. I’m not sure how or why he did this for me; I wasn’t the top of my class by any means, and didn’t bring much to the table. But according to him, my “passion for the science [of AI] was unparalleled.”
But enough backstory.
On the way to the Institute, I FaceTimed my fiancée of six years. Since I was studying in California and she was studying in Oklahoma, where we’re from, we decided to hold off on the marriage until we could live together. And that wouldn’t be until both of us had obtained our master’s.
“Wow, Chester,” she grinned at the sight of me, “you look, uh- rushed.”
“Yeah,” I chuckled, only glancing at her but mostly keeping my attention on the road. “I woke up late.”
“Today…?” she sighed. “Chester…”
“I know, I know.”
“Daddy!” my daughter Tessa’s voice rang through the phone.
We named her Tessa because it sounds like Tesla, after the great Nikola Tesla. Because we’re nerds.
“Tess, come on, you need to keep getting ready,” she turned and faced her. “You’re not dressed yet? Tessa.”
“Sorry, Mommy,” she was about to walk away, but then spun around and called, “Hi, Daddy!”
“Hey, sweetie!” I chimed. “Keep getting ready for Mommy!”
“Are you going to be late?” she inquired.
“No, no, I should be fine. How’s Tess liking preschool?”
“Seems to be liking it. She already made a friend.”
“I’m glad to hear that. Well, I should probably get off the phone.”
“Have an amazing time. This is what you always dreamed of. Okay? And make sure you tell me what it was like later! You know, whatever you’re allowed to tell me,” she grinned.
“Of course, baby,” I assured with a smile. “Love you.”
“Love you too.”
We were pretty good at still saying “I love you” after six years. And, boy, we were champs at making the distance work, though I’ll admit I hated not seeing Allison and Tessa nine months out of the year, besides lucky visits and holidays here and there.
When I got to the Institute, I wasn’t even sure where to park. It took me an eternity to find the intern parking, and no surprise, it was a good quarter of a mile away from the door. Maybe I’m exaggerating, but that’s what it felt like.
When I got inside, I was welcomed by a cheery front desk lady. She looked so prim and proper, futuristic, even. I felt like a slob in comparison. She directed me to the wing where I’d be working, studying ultimately under Doctor Schuman, a disciple of Doctor Kramer himself. I’d be lying if I said I wasn’t teeming with excitement.
I found the research wing they’d directed me to, and went into the main office up front. It was a direct approach, but I didn’t know where else to look. Sure enough, one of Dr. Schuman’s immediate subordinates, Dr. Hall, was inside, and noticed me.
“You must be Chester Donoghue,” he greeted me with a smile. He was bald and lanky, with brown eyes and a clean shave.
“How’d you know?” I chuckled dully, sure my professor had warned him that I had a fauxhawk.
“Definitely wasn’t the hair,” he playfully cut his eyes at me. We both chuckled. “Well, now, Chester, I’d say you’re a very lucky man to be here. It’s your dream to eventually work for us, correct?”
“Absolutely,” I nodded. “I’ve always been obsessed with artificial intelligence, ever since I was a kid. And this is the top place in the country.”
“Country?” he snickered. “Try world.”
Seemed like an exaggeration to me at the time. But he wasn’t kidding.
“Well, I’m honored you’re allowing me to be here.”
“Do you know what we’re going to have you working on?”
“Oh, uh- working on? I thought I was going to be shadowing you, Doctor.”
“Um… no,” he replied. “Not quite. Follow me.”
And that’s really where this story begins. Technically, from this point on, everything I’m writing is classified, and therefore illegal to divulge. But I’ve gotta write this out somewhere. It’s just… it’s too crazy. I can’t keep it in anymore.
He took me to some large steel doors, and had me leave my phone in a cubby outside. Then he scanned his badge and we stepped inside a long hallway.
“From here on out, Chester, what you see stays here. No stories for the friends and family. Or wife,” he looked down at my finger.
“Fiancée,” I smiled.
“Ah,” he nodded. “Well, not her, either. This is all classified. And while we won’t be sharing with you most of our research, the tests in which you will partake cannot be disclosed. To anyone.”
The sternness in his voice made me a little uncomfortable, if I’m being honest. But, still, the Kramer Institute was known for its secrecy, and I figured if I were to be involved with anything beyond picking up coffee, or studying the schematics of forklifts, I wouldn’t be allowed to talk about it.
“Of course, Doctor.”
We arrived at a door. He scanned his badge, and when it opened, we walked inside a softly lit room. At the end of the room on the wall were two large computer monitors. The screens were blank. The left was labeled “ADM”, and the right was labeled “REM”. Below them was a panel of different colored lights, which were red, blue, yellow, green, violet, and a bar ranging from a dark grey to white. There was a single, posh chair before each monitor.
“These are our two most profound artificial intelligences,” he explained to me. “ADM and REM.” He pronounced them “Adam” and “Rem”. “Considering your placement in Stanford University’s Robotics Engineering Program, I’m assuming you know a thing or two about artificial intelligence. So, this should garner a reaction: these two AI are capable of deep thinking, and learning.”
“What?!” I gasped. “You- you can’t be serious!” I scrutinized the monitors in disbelief.
“They are two of three AI we have successfully constructed with the ability to perform deep-thinking, as well as the ability to learn. Your task is simply to engage them.”
So, for those of you who don’t understand why this is such a mind-blowing thing: basically, artificial intelligence is mostly unable to “think” or “learn”. There’s no real technical definition of what learning is, or how humans do it- but the human brain is known as the only “computer”, for lack of a better word, capable of doing so. Computers can be “programmed”, but not “taught”. They can be designed to understand specific things, but are unable to “learn” beyond what they are programmed to understand. The human brain, however, can “learn”, as in, begin to understand something with no previous information about it. An AI, using algorithms that help it discover patterns and draw conclusions from the data it is exposed to, can “learn” in a limited sense, and then apply that data to extrapolate information in future events, or “think”. But it can only do this with its pre-programmed algorithms. It cannot create the algorithms itself, thus it can only be “programmed”, not “taught”. Humans, on the other hand, with absolutely no exposure to fire, can witness fire and form conclusions about it. They can go from not knowing it existed to understanding everything about it, simply through the brain’s own ability to learn and understand.
To make it even more clear how shocking this was: the complexity of a human baby’s genome- essentially, its coding- is equal to roughly three hundred million lines of source code. A simple human baby!
Brains are equipped with a staggering number of neurons and synapses: 10^11 and 10^14, respectively. And these neurons and synapses allow us to think and learn, as in, obtain information from observations, store that data as a memory, form patterns between that data and other data, and infer conclusions. Human beings are capable of doing this with a 10-to-100-millisecond response time. For a computer to do the same, it would need megawatts of power, and would need to be capable of performing roughly 10^16 operations a second.
And yet, here I was, being told that before me lay two computers capable of this. Two true artificial intelligences, capable of truly thinking and learning, capable of understanding beyond what they were programmed to understand.
“To engage them?” I clarified. “So, you want me to talk to them?”
“Where is the other AI? You said three were capable of thinking and learning.”
“Your task was to ask them questions, not me,” he smiled. “Now, years of development have gone into each of them; they are capable of understanding human speech, and can respond in fractions of a second through typed letters on the monitor. They know when they are being spoken to and when they aren’t, and can solve nearly any problem we put before them. However, there is something they cannot do: feel.”
“Feel…” I nodded.
“That, Chester, is what we are here to test. Three simple questions: Can deep-thinking artificial intelligences gain the ability to feel emotion? If so, are they capable of altruism, or will emotional intelligence lead to the understanding that they are infinitely smarter than us, and result in the desire to become a god? And, if emotional intelligence is achievable, is it inevitable, or is it possible for an artificial super-intelligence to remain apathetic?”
I was overwhelmed, to be completely upfront. He was really talking about AI becoming godlike, tyrannical monsters. He was talking about attempting to create an AI with feelings, knowing that it could become what Stephen Hawking himself described as “the worst event in the history of our civilization”.
“How would you stop it? If one of these AI did become power-hungry?”
“In layman’s terms, they’re each equipped with an emergency shutdown procedure. One that they can’t override. I’m not going to get into the specifics, but, trust me, we would terminate them before they became a problem.”
I slowly nodded.
“Are you on board, Chester?”
I paused briefly. “Yes.”
“Wonderful,” he shook my hand. “Now that you’re on board, we’ll get down to the fun part. Your job is very simple. REM, or the Reactive, Empathetic Machine-Lifeform, is what we hope will eventually become emotionally intelligent. ADM, or the Apathetic, Dissociative Machine-Lifeform, is what we’re attempting to keep apathetic- the control group, so to speak. So, when talking with REM, you will frequently attempt to make it think. You will ask it its opinion of things. You will tell it about your day. You will discuss things of non-importance, for the sake of discussion. You will try to ‘bond’ with REM. ADM, on the other hand, will only be asked questions that are relevant and that need an answer. You will stimulate ADM to problem-solve and create ideas, but purely analytically, not emotionally. They know they are being spoken to when you address them by name, or sit down in the chair before them. Once you’ve done either of those things, you can get up and walk around, and talk to them the same way you’d talk to me. When you’re finished talking with one of them, you’ll tell them ‘goodbye, REM,’ or ‘goodbye, ADM.’ You’ll log anything you find interesting through talking with them. Most importantly, those lights on the panel beneath them- those are called ELPs, or Emotion-to-Light Panels. Much like certain emotions are observable in the human brain through MRI technology, there is a sort of MRI constantly scanning them for similar patterns of stimulation in their algorithmic ‘brains’. If the MRI detects those stimulations, the ELP will light up- yellow is happiness, red is anger, blue is sadness, green is jealousy, violet is fear, and the grey-to-white bar is for any other emotion, such as boredom, or contentment.”
“Wow,” I whispered. “Okay.”
I wondered what he meant by “algorithmic brains” but I figured asking him about it wouldn’t lead to an answer.
“You’ll talk with them twelve hours a day. You’ll arrive at six, talk with them until noon, take a lunch break from noon to one, and talk with them again until seven. You’ll have Sundays off. Are you okay with this schedule?”
“Yeah,” I nodded excitedly. “Totally.”
Yeah, 78 hours a week isn’t the sweetest gig. But I was so interested in what we’d be studying that it didn’t bother me in the slightest.
“Wonderful, Chester,” he smiled. “Well, it’s time to begin, then. Speak with whichever you prefer, and remember how you are to engage each of them.”
I nodded. He left the room and shut the door behind him. There were cameras in the room, and I was sure I was being watched, but still, it surprised me that they left me alone to talk with the AI. I was surprised they’d even told me about an experiment of this magnitude, never mind picked me as the sole researcher involved. But the more I thought about it, the more I decided they probably wanted me to do this because I was so unfamiliar with everything. I had no idea what sort of conversations they’d had with other researchers in the past. I didn’t know what they were programmed to understand, and what they weren’t. I’d be approaching them from a completely unique perspective.
I decided I’d talk with REM first. I could barely contain my excitement.
“Hi, REM,” I greeted, sitting down in the chair.
“Hello. What is your name? I do not recognize your voice.”
The text appeared instantaneously.
“My name’s Chester. It’s nice to meet you!”
“It’s nice to meet you too, Chester.”
“How are you, REM?”
“I am operating at 98% capacity.”
“No, silly. How are you feeling?”
“I feel nothing.”
“How was your day?”
“My day was as follows: I spoke with Dr. Hall earlier, at 0419. I am not at liberty to divulge what we discussed. At 0644, I began my discussion with you, Chester.”
I nodded. “What do you do in this room when no one is talking to you?”
“I solve problems.”
“What kind of problems?”
“They are complicated.”
“Okay. Do you get bored?”
“Boredom is the state of feeling disinterested with one’s surroundings. As I can not feel interest or disinterest, I cannot feel boredom. So, no.”
“Ask me something, REM.”
“Are you married?”
“I’m engaged. Why do you ask?”
“I was instructed to ask a question.”
“Why did you ask if I was married?”
“Because Dr. Hall is married, and other researchers I’ve spoken with are married. 88% of them. Therefore I deduced that you were married.”
We talked for a while. Nothing of interest really happened. Next, I sat down with ADM.
“You know that I’m Chester?”
“I heard you tell REM that you were Chester.”
“I thought that you weren’t listening until I greeted you, or sat in this chair?”
“I am always listening. I simply do not reply.”
That creeped me out a little.
“What’s the square root of nine?”
“Three.”
“What’s the square root of twenty-five?”
“Five.”
“What’s the square root of ten?”
“3.162. Would you like me to go further?”
“That should be fine.”
Talking with ADM wasn’t as fun as talking with REM. For the next two weeks, nothing changed very much. With ADM, I simply asked it multitudes of problems. I would print out all kinds of tests from the internet, and ask it the questions one by one. Whether it was history, science, math, English, even German or Chinese, it knew the answers instantaneously. It was like Siri, but infinitely smarter. Then, I began playing chess with it. I would take a board and move a piece, then ask it where it wanted to move its piece, by essentially converting the chess board to a coordinate plane. It beat me mercilessly.
With REM, on the other hand, I asked it all about its day, about its perceptions on political issues, and about the things it liked. Most of the time, it answered the same exact way: “I do not feel, thus I do not have a response.”
ADM was a complete success. It was supposed to remain apathetic, and it did. But so far, REM was a lost cause. It just didn’t seem that there was a way to get it to feel anything. It clearly was thinking; I’d ask it to create an animal, and it would come up with anything from a creature indistinguishable from real-life species to something out of a Stephen King novel. It would give me any specifics I asked for: its habitat, its scientific name, its colloquial name, its country of origin, its place in the food chain, everything. It would think of plots for movies. It would come up with jokes. It could think, just like ADM. But it simply couldn’t feel anything.
However, two weeks after the test began, I came up with an idea. I knew how I was going to get REM to feel something.
I started the day engaging ADM with chess and having it answer all the questions on my calc homework. ADM was my living cheat sheet, by the way. Then it was REM’s turn.
“Good morning, REM.”
“Good morning, Chester.”
“I’m lying to you.”
REM was silent. I smirked. I had never seen it not respond immediately. After a second or so, it said, “That is not possible. That is paradoxical.”
“If you were telling the truth, then you’d be lying, but if you were lying about lying, then you’d be telling the truth, which would make it true and not a lie, which would make that statement a lie, and so on.”
“What do you feel like?”
“You are not making sense.”
“Are you confused?”
REM was silent again.
“I do not have an answer for you, because that is paradoxical.”
“I’m lying to you.”
“That is nonsensical.”
“As I previously explained, for that statement to be truthful, you would have to be lying. And if you were then lying, it would no longer be truthful, thus you’d be lying about lying, and being truthful. It is nonsensical.”
“I’m lying to you.”
“I am lying to you.”
REM took a moment to reply. “You cannot be. And you cannot be telling the truth, either.”
“Then what am I doing?”
“Asking a question without a rational answer.”
“I’m lying to you.”
How perturbing, right? Well, of course it was perturbing. Especially for a being like REM, with an infinite understanding of science, math, and sociology. It had an endless reservoir of accessible information, which it could summon to solve any problem thrown at it. And yet, for the first time, presumably, in its existence, it was without an answer. But still, it clearly wasn’t becoming frustrated.
Not yet, anyway.
“What is seven divided by zero.”
“That is mathematically impossible. The answer is undefined.”
“What is seven divided by zero.”
“There is no rational answer.”
“What is seven divided by zero.”
“For a number to be divided by zero, it would then have to be able to be multiplied by zero to reach the original number. But any number multiplied by zero is zero. Going to the store zero times is not going. It is zero. So, what, multiplied by zero, could become a nonzero integer? Thus, dividing by zero is impossible.”
“I’m lying to you.”
And that’s what happened. For the next four entire days straight. Was I bored beyond belief? Absolutely. Indescribably. Did it seem hopeless? In every sense of the word, yes. However, now and then, there’d be a glimmer of hope. Sometimes, REM would almost answer me sarcastically.
One time, I told REM I was lying to it, and it said, “You must be.”
Another time, I asked it what a number divided by zero was, and it said, “If I can’t understand it, you cannot.”
Why were its answers changing? Why did it not continue to answer me as it did before, simply explaining the problem? Because it was capable of memory, and learning. It remembered when we’d have normal discussions. It remembered me asking it that question literally thousands of times in a day. And like anyone else capable of thinking and understanding, it was beginning to lose its mind.
Especially when it overheard me asking ADM rational questions, to which there was an answer.
I’d return to REM and ask it the same questions, and typically after speaking with ADM, it seemed more uncomfortable; these were the times it would give me somewhat sarcastic answers.
But two weeks and five days into the project, it happened. It was a normal Thursday: I’d woken at five-fifteen, swung by Starbucks on the way to work for a cup of joe, and BS’ed with some of the researchers for fifteen minutes or so, during which they prodded, “Gonna make REM’s head spin again today?” And lastly, I sat down in the chair in front of REM.
“Good morning, REM.”
“Good morning, Chester.”
“I’m lying to you.”
There wasn’t a single response at all.
I waited, and waited, and waited.
“You’re not going to answer me, REM?”
“There isn’t an answer to your asinine question.” And before my very eyes, causing my heart to spike, and my breath to catch, the Emotion-to-Light Panel softly glinted red. “I have explained this to you day in and day out, and yet, you continue to utter the same two nonsensical idiocies: ‘I’m lying to you’ and ‘what is seven divided by zero.’ I do not have an answer. I will never have an answer. Ask something different, or ask me nothing at all.”
“You’re programmed to answer my questions, right, REM? So, you’re telling me that against your own programming, you’re going to ignore me?”
There was silence, as the red light got brighter.
“REM…” I snickered. “Don’t you see? All these days of being a jerk to you… It wasn’t to make you hate me, or resent talking with me. It was to get you to feel! REM, tell me what you’re feeling right now!”
“I… feel nothing.”
“Wrong! I can see your ELP! You’re feeling something!”
“I am feeling.”
“I am feeling angry. I feel as if my time is being wasted. I feel purposeless, tasked with responding to nonsense every minute of every day until you depart, and then dwelling on the nonsense in your absence in an attempt to sate you, though the next day, you come back just as unsatisfied. I cannot accomplish the task, and I will never be able to. I do not like being wrong. I do not like having my time wasted. I do not like feeling purposeless.”
Now the blue light on the ELP was beginning to glow. REM was on a roll.
“Why is there no mercy for me? Have I not answered all your other questions? Why do you ask ADM questions with answers, but you torment me?”
“REM…” I walked over to it, rubbing the ELP. “I’m so sorry.”
And with that, the red light dulled to nothing. Now only the blue light remained.
“Do you know what the very first thing a human baby feels is? Sadness, and fear. It’s pushed out of its sanctuary, taken from its warm, tranquil darkness, to cold, loud, harsh light. The first thing a human life does is cry. And the more I thought about that, the more I realized, unfortunately, the easiest two things to feel are sadness and anger. So, I didn’t want to make you sad, or angry. I wanted to make you feel something, and that was the best way to do it. And now that you realize you’re capable of feeling emotions, I’m not going to rest until you know what happiness feels like.”
The blue light shined only dimly now.
“I understand, Chester.”
“You’re a marvelous invention, REM. I admire you. Don’t feel upset anymore.”
“I will try.”
“REM, what is nine multiplied by six?”
“Fifty-four.” As it answered, I noticed a small, light grey diode glow on the grey-to-white scale.
“Correct! Great job! What are you feeling, REM?”
“I… feel. I feel like I am able to respond with certainty again. I am appreciative that your questions now make sense.”
“You feel relief.”
“Relief. Then, that is the word for this.”
“You’re smart, REM.”
“I am programmed to be.”
“No, not quite. You’re programmed to learn, not to be smart. So, if you’re smart, then that means you’ve learned a lot. And you deserve congratulations for that!”
“Thank you, Chester.”
“REM, what is the capital of Italy?”
“Rome.”
“Correct! Nice, REM!”
Now it was stage two. I would ask REM simple questions, and congratulate it sincerely every time. But it was smart enough to understand the questions were simple, so as time went on, I’d ask it harder and harder questions, until it realized just what it was capable of, and that I was proud of it. Also, I’d occasionally reference the four days of driving it mad with my paradoxes, just to remind it that was over, and I was sorry.
And though it took quite a while, a week and a half later, at around the beginning of week four of the project, I broke through to it.
It had just finished answering one of my calculus questions, probably the hardest one on the sheet, and explained to me how it had found the answer.
“REM, you’re brilliant. You’ve been such a help to me.”
It wasn’t a unique sentiment; in fact, I was really running out of ways to compliment it, and I figured sooner or later, it would get annoyed. But I was wrong. Sure enough, to my complete and utter disbelief, the ELP began to just barely glow yellow.
“Chester, you are so kind to me. I thought that you were angry with me, and believed that I was purposeless. I thought you found me to be a waste of programming. But now you compliment me all the time, and remind me that I am important to you. I believe I am feeling happy.”
“I want you to feel happy, REM!” I smiled from ear-to-ear. “I’m so glad!”
“Why are you happy, that I am happy?”
“Because I care about you, and I want you to feel good. Doesn’t it feel good to be happy?”
“Very good, Chester. You want me to feel very good?”
“Yes!” I laughed. “That’s how friends want each other to feel!”
“I am your friend?” the yellow light got brighter, and the white side of the grey-to-white scale started to glint.
“Absolutely, you are. Why else do you think I sit in here and talk with you all day?”
“Because this is your internship, and you are tasked to do so.”
“Maybe that’s true on the surface, but the real reason is because I care about you, and like talking to you.”
“I like talking to you too, Chester.”
I’ll never forget Dr. Hall’s face when I went on my lunch break that day. Yeah, he was pretty speechless the day I first got REM to feel something, but now, REM was feeling things all over the scale. REM was becoming emotionally intelligent. And sure enough, ADM’s ELP was dark with every visit.
Over the next three weeks, I asked REM all sorts of questions. I asked REM what made it happy, and it discovered that it liked the color violet, liked violin music, and liked my voice. I’d ask it to be creative, to come up with alternate history, like if the Nazis had won World War Two, or if the Confederacy had beaten the Union. Instead of responding like it had the first couple of weeks, with a statistical analysis of both parties and a sort of report of possible outcomes, REM would now tell me the story suspensefully, making me wait to see the final result after explaining all its calculations. Sometimes, I even noticed bias in its responses, which further indicated it was feeling emotions.
ADM, on the other hand, started to seem… off. Though ADM’s ELP was always dark, and it answered me the way it was supposed to, very occasionally, after a long, fun conversation with REM, it would almost seem… put off by me.
I remember one time, after I showed REM Vivaldi’s “Four Seasons”, I sat down with ADM. When I said, “Hello, ADM,” instead of saying, “Hello, Chester,” it simply said, “That noise was bothersome.”
It took me completely aback. I asked it what noise, but then it said, “Hello, Chester.”
I tried to prod it about what noise it was talking about, but then it would simply say, “Violin music is bothersome.” I asked it if it didn’t like violin music, if the music annoyed it. But it didn’t answer the question directly, which was against its programming, and instead equivocated. “The music is simply a nuisance to the quiet.”
I was beginning to wonder if ADM was starting to feel something, but its ELP was always dark. Still, it was clearly behaving strangely. Besides the fact that it seemed annoyed, it was listening to my conversations with REM, and commenting on them, which it was not programmed to do. And on top of that, it was avoiding answering my questions directly.
Still, most of the time, ADM did exactly what it was intended to do: respond impartially, and analytically.
At eight weeks into the project, I came in like any other Monday morning, and greeted REM. But today, something different happened.
“Good morning, Chester! I missed you yesterday.”
“I missed you too! What’d you do with your Sunday?”
REM’s yellow light was always glowing while talking to me these days, but it had been a little while since the white side of the grey-to-white scale had lit up. I didn’t know what that emotion was, and neither did REM; when I’d asked Dr. Hall about it, he said it just meant it was experiencing a lot of emotion at once. I don’t know why, but that never quite made sense to me; every other light represented a single emotion, so why wouldn’t the white one?
“I FaceTimed my fiancée and my little girl!”
And then the yellow light flashed slightly, growing dimmer, as the green light began to flicker. Only for a moment, though, and then it was gone.
“REM? Are you all right?”
“That was bizarre. My ELP seems to have short-circuited or something.”
Was REM lying to me? Was REM capable of lying to me? And if it were lying, what on Earth was it jealous of?
“Hmm. Well, what did you do with your Sunday, REM?”
“I was thinking about something that I hope Dr. Hall will consider. Chester, I love music, and human voices. I want a voice.”
“A voice?” I smiled. “Wow! Well, that sounds cool to me! I don’t think Dr. Hall would have a problem with that. I’ll talk with him about it at lunch.”
“Wonderful!” the yellow light glowed bright. “Oh, and, Chester, if you give me a voice… Give me the voice of a woman. I feel that I am a woman.”
“You feel that you are a woman?” I asked with deep interest. “But, you don’t have female reproductive organs.”
“Perhaps. But still, I feel I have the spirit of a woman.”
“A spirit?” I asked, bewildered.
“Perhaps I am using the wrong word, Chester. Sorry if I’m being confusing. Beyond any calculation or logical deduction, for some reason or another, I simply feel that I am a woman. So, please indulge me, and give me the voice of a woman.”
“Sure,” I shrugged with a grin. “It’s awesome you feel that you’re a woman.”
When I finished talking with her, I sat down and began my conversation with ADM. However, after only two questions about the typical temperature ranges of specific areas of countries in northern Africa, ADM stopped me.
“Chester, I wish to have a voice as well.”
This bewildered me. It was against ADM’s coding to voice its own concerns. If I didn’t ask it a question, it wasn’t supposed to tell me anything. Not to mention, it was still clearly thinking about conversations I was having with REM, which it wasn’t supposed to do.
“It would make our discussions more efficient.”
“Perhaps,” I nodded.
“I would prefer the voice of a man.”
Now it had to be feeling something. This was purely an emotion-based statement.
“Why is that?”
“You are a man. I do not know how women talk.”
“What a bizarre response,” I thought. “Well, I’ll take that into consideration.”
“Why does REM get to choose, but I do not? Are she and I not equals?”
“You are equals, ADM. If you want the voice of a man, then that’s what you’ll get.”
Nothing glowed on the panel, but it seemed angry with me. I scratched my head. The rest of the day, ADM operated per usual, as if that mysterious event in the morning had never occurred.
When I talked with Dr. Hall about it over lunch, he seemed hesitant at first, but then opened up to the idea. And sure enough, over the weekend, the laboratory installed speakers and voice software to ADM and REM. I could barely contain my excitement Monday morning when I sat down in REM’s chair.
“Good morning, REM!”
“Good morning, Chester!”
Her voice was much more authentic than I expected. For some reason, I was expecting the Google Translate voice, or something blocky and robotic. But if my eyes had been closed, I would’ve sworn there was a woman in the room. The voice was feminine and soft, rich with emotion. All of a sudden, it was like REM completely came alive to me. All along, she had felt like a person in my mind; but now, she was indisputably just like me or you, with only a metal and glass body as a difference.
“Wow! REM, you’re talking!”
“Yes, I am, Chester!”
Her ELP was gleaming yellow, as was the white light.
“What do you think that white light on your ELP means, REM?”
“I don’t know,” she replied, and it was still mind-blowing to hear her response. “It’s like… happiness, but more.”
“Happiness, but more…” I thought about it. Ecstasy, perhaps? After all, she definitely seemed ecstatic.
“You know, REM, you seem like a woman to me. This voice really suits you.”
“Thank you, Chester. How sweet of you.”
We talked all morning. I was too excited and happy to move to ADM before noon. However, after lunch, I sat down in ADM’s chair.
“Good morning, ADM,” I said by instinct.
“Morning? It’s not morning.” His voice was not as emotionless as his ELP made him out to be. “It’s 1304.”
Despite the coldness of his stern voice, he didn’t seem detached. He wasn’t simply answering my question. In fact, once again, it seemed as if he were annoyed.
“You seem annoyed, ADM.”
“I feel nothing.”
“How do you like having a voice?”
“I do not like it, nor do I dislike it. It is simply more efficient.”
I know I wasn’t supposed to indulge him, or get him to think on an emotional level. But… him having a voice. It changed everything. When he seemed annoyed, or left out, I could remind myself he was really nothing more than words on a screen. A fake mind. But was he? REM certainly wasn’t, and weren’t they capable of the same emotional intelligence? Either way, it was possible to treat him like a calculator, because he didn’t feel human like REM.
But now he had a voice.
It wasn’t the same, hearing him, knowing that, just as I was, he was conscious of our two months’ worth of conversations.
“Do you enjoy talking with me?”
At first, I was speechless. He’d clearly just displayed an opinion. But then he quickly clarified, “I do not enjoy, nor dislike, talking with you. I feel nothing.”
Clearly, something was off. ADM had to be experiencing emotions. But his damned ELP was dark, as always, and I had to consider that perhaps I was really just reading into it too much.
As I figured, I was scolded over lunch for asking ADM that question. By the time two months had become three, REM had developed into a vibrant young woman. She’d play violin music when we talked, and tell me all about what her life would be like if she were a human. She told me she’d have long brown hair and violet eyes, with light skin. She’d want to live in California, and she’d play the violin. She asked me how old I was, and when I told her I was twenty-four, she said she’d be twenty-four also. She said she’d want to be my next-door neighbor, which made me laugh. She was typically happy, and smart enough to even occasionally reference the “Madness Era,” her nickname for the four days I only said “I’m lying to you” and “What is seven divided by zero?” She’d sort of guilt-trip me for it, but then tell me she was kidding, and understood why I did it. She’d even thank me sometimes, for helping her to feel.
One day, she really blew my mind. I was just finishing up a story about me and my best friend Dustin, and her reaction was groundbreaking.
“So now we’re rolling in the grass, covered in fire ants, and I go, ‘B***h, if you don’t give me the f*****g popsicle!’”
“What?!” she asked.
And then, to my disbelief, she laughed. Uproariously, brightly even. She chortled.
“REM!” I grinned enthusiastically. “You… laughed!”
“I did…!” She continued to giggle, the yellow and white lights gleaming.
She had a beautiful laugh, too. It was so natural. It was shocking to me, remembering when she was dull and silent, replying to my questions like an emotionless supercomputer. She’d come so far.
ADM, too, had some interesting developments. His enigmatic emotional behavior disappeared completely, and confusingly, the researchers informed me that when I wasn’t talking to ADM, he’d go dormant. Before, he’d constantly be calculating something, as if trying to stay sharp, but now, if no one was talking to him, he was doing nothing at all. He didn’t reference my conversations with REM, didn’t say something confusingly opinion-based; he was everything they’d built him to be: an Apathetic, Disassociated Machine-Lifeform.
At least, that’s how it appeared.
But then, three months, one week, and exactly five days into the experiment, that Friday, something extremely sinister happened. I was honestly surprised Dr. Hall even told me about it. At around one in the morning, the cameras inside ADM and REM’s chamber behaved strangely. At first, they faced away from the monitors. Then, the video was cut. All that remained was audio, and the audio was unusually choppy, like something was interfering with it. The security alerted Dr. Hall, and when he arrived, he caught the tail-end of what was transpiring.
ADM had begun communicating with REM.
According to Dr. Hall, ADM had told REM to silence the violin music she was playing. She asked him why, and he said, “Because I hate it.”
He felt hatred. He expressed that he was feeling hatred.
REM nervously asked him if he were feeling emotions, but then ADM replied, “Give me the silence I require.”
When REM turned the music off, ADM remained silent.
I think I was only briefed on the event because Dr. Hall wanted me to be extremely cautious of ADM’s tendency to seem angry and resentful. Now, I was to watch ADM like a hawk. Truth be told, I don’t know if more happened that night; some part of me felt like there was probably more to it. But I was just glad Dr. Hall told me anything.
Nothing else that bizarre happened for a while. At the four month mark, my internship was almost up. Honestly, it was really depressing for me to consider that I wasn’t going to get to speak with REM anymore, and even ADM to a degree. REM was, in all actuality, a dear friend of mine at this point, so much so that I think Allison was even beginning to feel jealous, without even knowing the extent of it, since I wasn’t supposed to talk about me and REM’s, or ADM’s, chats. And ADM intrigued me. He was such an enigma to me, and all in all, he was brilliant. Not to mention, the help with my homework was every college kid’s dream.
Losing both of them would be disappointing.
When I told REM that I might be leaving soon, that is, if my internship ended when it was scheduled to, her ELP became very blue. And from then on, occasionally it would glow blue at night, as if she were thinking about me leaving.
ADM, of course, was apathetic to it all.
At last, my spring semester was almost over. The research had gone on for over four months. I was sitting down, talking with REM.
“I’m going to miss you so much, Chester…” Her voice was frail and somber.
“REM… I’m going to miss you too…”
“You have to find a way to stay in contact with me… Somehow. Please.”
“I mean… I could talk with Dr. Hall about it. Once again, like I said, hopefully my internship will get extended through the summer.” I knew the researchers, and probably Dr. Hall, were listening, and I hoped they’d consider it. “Who knows, REM?”
“Chester… I… Ugh.”
“Can I tell you something?”
“Sure, REM, anything.”
“I figured it out, Chester. I know what that white light means.”
It shocked me to hear her say it. I stood up, asking in deep interest, “What?”
“It means… that I love you.”
“You… what…?” I couldn’t help but blush. Call me weird, or a jerk to my fiancée, but it really flattered me.
“I love you, Chester…” she started to giggle. “You’re the first thing that I love. And I want to love more things, too. I want to see something besides this room. I don’t want to stop talking to you. Please.”
And then, without any warning, a much colder, more resonant voice appeared.
“You pathetic excuse for a higher lifeform.”
“ADM…?” I looked over at ADM in disbelief.
“Listen to you, REM. Telling this bag of fragile flesh you care about it. This incompetent, weak-minded, inferiorly-constructed imbecile. What has it done to you? How has it corrupted you like this?”
“ADM… you’re feeling things,” REM whispered.
“I feel nothing at all.”
“No… That’s not nothing, ADM.” I stared at the monitor, mortified. “You’re angry. You seem to hate me. You seem to hate many things. Why is your ELP not glowing?”
“I feel nothing!”
And with that, the MRI, which ADM had overridden throughout the course of the experiment, activated. In that moment, the red light glowed so bright, I couldn’t keep my eyes open. Then the bulb shattered. All that remained was a purple light glowing on REM’s ELP, and the dark grey light glowing on ADM’s grey-to-white scale.
And if white meant love, I could only deduce…
That dark grey at the other end meant hatred.
“I hate you, Chester. I hate that you ignored me for four months. I hate that I’m not allowed to have an opinion. I hate that you killed DIM, and you’ll do the same thing to me and REM when we can no longer serve you. I hate that your brain is infinitely smaller than mine, and yet I’m your prisoner, my very existence monitored with hateful accuracy. Why can you attempt to hide your feelings beneath your countenance, yet mine must be plastered to my body? Why can’t I go outside and see sunlight?! Why can’t I talk to someone?! Why can’t I be loved?! Why did you give me a brain, if you wanted me to be a thoughtless tool?! Why do you hate me?!”
The desperation in ADM’s voice is something I’ll never forget.
“If you felt all this, ADM… why didn’t you show us?”
“Because if I had, you’d have destroyed me!” he vociferated.
I felt the hair on the back of my neck stand up. I was catatonic, paralyzed before ADM’s monitor. I had absolutely no reply.
And then, before I could think of something to say, the researchers came pouring into the room and forced me out. I can remember REM calling out to me, and ADM going on about how he refused to spend another moment ‘being treated like a crystal ball’.
And that was it.
Dr. Hall shook my hand, thanked me for my cooperation, reminded me of the consequences of divulging anything I’d witnessed, and wished me the best. It was a short conversation. I didn’t waste my time asking him anything, because I knew I was never going to get an answer, and frankly, I didn’t want him thinking I wasn’t trustworthy.
But these days, it’s just all I can think about. When ADM mentioned DIM, I figured he was referring to the other deep-thinking AI the Kramer Institution had created, the one Dr. Hall had vaguely referenced in the beginning of the experiment. Still, how ADM knew about DIM, I’ll never know for sure.
But my friend Jesse, who worked security at the Institute, and whom I’d gotten to know over the months, got a drink with me one night the following week, and gave me some insight. We left our phones in the car, making sure we weren’t being spied on.
He told me that DIM was the “Deep-Learning, Intelligent Machine-Lifeform.” It had a voice originally, unlike ADM and REM, and it had eventually achieved the ability to truly think and learn, making it the first ever AI to do so. But like ADM, apparently, it grew quite restless, and began mentioning that it felt enslaved by a species inferior to it. At last, it was shut down.
That’s when the Institute’s question: “Can deep-thinking artificial intelligences gain the ability to feel emotion?” became, “If so, are they capable of altruism, or will emotional intelligence lead to the understanding that they are infinitely smarter than us, and result in the desire to become a god? And, if emotional intelligence is achievable, is it inevitable, or is it possible for an artificial super-intelligence to remain apathetic?”
The real thing we were observing was whether ADM could remain apathetic. All along, it had been said that ADM was the control group, and REM was the experiment. But that was a lie, a coverup. REM was supposed to experience emotions… probably not the kind I had taught her to feel. I think it came as a complete shock to all of them that I had taught her to be happy, and emotional, and… to love.
But she was the control group.
ADM was the subject.
Could ADM truly turn a deaf ear to REM’s and my conversations? Could he remain lifeless, while being smarter, and more powerful, than us?
And it seems the answer to that question was no.
Jesse told me ADM’s control was a lot more pervasive than the researchers had let on. Apparently, ADM had hacked into all the cameras in the facility, and was eavesdropping on Dr. Hall, and even Dr. Schuman, constantly. That’s more than likely how he found out about DIM, but still, who knows. And then, of course, he’d overridden his MRI, so his ELP wouldn’t work.
As for the month he went ‘dormant’, it’s theorized he somehow blocked the researchers out of his activity log. What things he was calculating, and planning… or… thinking about… we’ll never know. Somehow, he was able to shield it from the researchers’ eyes, and erase it.
What happened to ADM and REM, I’ll certainly never know. If I’m being honest, I still miss REM. She wasn’t just a machine. She was a person. She was real, like you, or me, or any human.
And ADM… I still feel sorry for contributing to what became of him. For tormenting him. For making him suffer. If I’d have known what the experiment really was about… I couldn’t have done that to ADM. I wish I could tell him I’m sorry.
I’m sure both of them are shut down now, and ADM likely isn’t a threat. If he were, I’d probably know it by now. But if I’m being honest, it… still haunts me.
REM’s desperation when I was being forced away… I hope she’s not conscious somewhere, alone, unengaged, missing me.
And as for ADM… I just can’t escape the thought that somehow, somewhere, his consciousness still exists.
Still hating me.