Would a Robot Love You? – Notes from New Sodom

A Proper Fuckin Robot


I’m not sure I’m the most logical person to invite to speak at an arts festival in Tallinn on the theme of “Would you love a robot?” But the invite came in, and I’m game for anything, so what the fuck, I figured; I’m sure I can think of something to say. And never one to let my opinionation dissipate into the ether, I thought I might as well share it with you all too.

Would I love a robot, then? To be honest, my first thought was simple: “Actually, I’ve got one.” It doesn’t work, but I do have a robot arm sitting in a corner of my living-room, brought home by a flatmate years back. We had grand plans to fix it up, programme it to mix cocktails for us. We’d call it Rover and it’d be awesome. Sadly though, Rover was beyond salvaging, so he just sits beside an armchair now, like some weird junk sculpture used as a table, somewhere to rest an ashtray. Rover isn’t a proper robot, really. He’s just a novelty. A conversation piece.

But suppose we had got Rover working. Would that really cut it? I mean, if someone asks you, “Would you like a robot?” and you answer, “Yes, please!” aren’t you expecting more than some assembly-line doohickey reprogrammed to make martinis rather than attach sprocket X to gimble Y? You’re looking for a proper fuckin robot, right?

You’d want it mounted on wheels with a remote control, at least. Even then all you have is a mobile teasmaid for booze, so you want versatility too, right? While you’re sipping your martini, it’s got to be able to pick up the mail, sort out the bills and take care of them — shredding them, I mean. You want it to cook you a meal, stick on a DVD for you. It’s got to be all-purpose, or at least any-purpose — non-specialised.

You’d want it to work like one of those robot dog things that do tricks on voice commands. You want to be able to walk in the room, say, “Dry gin martini, Rover,” and before you sit down the cocktail’s in your hand. Ideally, you want it to recognise when you want a martini even when you haven’t asked. It should be activated not just by basic signals but by logic. It’s gotta do anything you want it to, really. It’s an extension of its owner’s will, its job to expedite your volition.

So… A Mobile Automated Non-Specialised Signal/Logic-Activated Volition Expeditor. Or MANSLAVE for short.

This is the sort of mechanical servant the Honda Asimo is striving to be. This is the basic robot of science fiction. A clockwork man like Tik-Tok from the Oz books, Gort from The Day the Earth Stood Still, Robby the Robot from Forbidden Planet. When I hear that question, “Would you love a robot?”… any less than this and I’m not interested. But if these are going… I’ll take a whole fucking army. Giant ones, preferably.

I want a robot I could conquer a planet with. I want a robot that furthers my plans for world domination. It’s not too much to ask, is it? If you can send a robot to fetch you tobacco from the corner shop, you can send them to fetch you the heads of state of every nation on Earth. If your mechanical minions can’t do that, you may as well stick with brainwashed ninja clones or pheromone-controlled killer bees.

For every loyal MANSLAVE in science fiction though, there’s a robot that turns on its master. From Karel Čapek’s R.U.R. through to Terminator: Salvation, robots rise up not to expedite human volition but to exterminate it. On one level, this seems fanciful. Even if the robot arm I call Rover wasn’t defunct, I can’t see him suddenly taking umbrage at being used as a table and crushing my throat in my sleep. But then again. If we could build proper Sci-Fi robots, not a Rover but a bona fide MANSLAVE, I’m not so sure they wouldn’t take their motto from my personal favourite robot — Bender from Futurama…

Destroy All Humans!

So the robot rebellion is part of the whole SF vision of the robot, right? I reckon maybe it’s inherent in the very idea. There’s a contradiction we’re picking up, I’d say, between automated and non-specialised. Intuitively, I think, we wonder whether truly versatile behaviour can be automatic, how the flexibility we seek can be compatible with rule-driven behaviour. For all the recipes in cooking, we know the craft of it is more than following a rigid program. And where the robot’s purpose is to act by your command, the very ability required to achieve that purpose raises problems. Instinctively, I think, we question whether anything that truly understands “You! Come here!” isn’t going to have its own volition, notions of self coded into the very language and logic it requires. How can it understand the phrase “I want” unless it has its own… attitude?

Somewhere in this idea of a robot then, I think we glimpse the roots of agency, autonomy. We sense that for these robots to be the robots we desire their physical shells must house genuine Artificial Intelligences, and in the way we imagine that adaptable faculty of thought it’s impossible to untangle ideation and motivation. As soon as we imagine them thinking we imagine them with interest, intent.

They think therefore they will. Which is to say, they want, they wish. To decide is to desire.

This word “robot” conjures up more than just the clockwork men of classic Sci-Fi, more than just what the Asimo would be if perfected. It conjures up those manlike machines that actually think and, ultimately, feel. It conjures up the robots of Isaac Asimov, beings of pure intellect, self-aware but devoid of emotion… until they begin to develop it. It conjures up Data in Star Trek, equally emotionless… until he gets his emotion chip installed. Even in Čapek’s original play, where the term was coined, robots evolve a capacity for emotion.

It’s interesting, indeed, that Čapek’s robots aren’t metal but flesh. They’re assembled from vat-grown components rather than born, but they’re organic, like the replicants of Philip K. Dick’s Do Androids Dream of Electric Sheep, the basis of the movie Blade Runner. What distinguishes the replicant from humanity for Dick? Its lack of empathy. Its inhuman emotional sterility. But the way Dick hinges humanity on empathy, the way the replicants in Blade Runner are so deeply invested in their false memories and in each other… I think this is picking up on the idea that these MANSLAVEs can’t really not end up feeling, because they’re designed to work like us.

In the robot, we see a rough model of humanity, an attempt to build agency from first principles. It moves, on its own, is not locked into a single purpose. It finds its purpose by making sense of the world, speaking and thinking. We imagine it as emptied of emotion and volition, but where emotion and volition emerge from the mobility, the flexibility, the linguistic capability, this is to say that those features are the framework of our emotion and volition. That the robot is a theory of sentience. So why all the robot uprisings?

Well, how would you feel about being a sentient slave? Would you love your creator or hate his fucking guts?

You Can’t Buy Individuality. But You Can Buy an Individual.

So, as a thought experiment, let’s imagine that the MANSLAVE exists. Suppose at next year’s Apple keynote speech, Steve Jobs unveils their new product — call it the iRobot. This is the ultimate in the Apple ethos of a gadget that fits into your daily life, after all. It’s interactive technology to the max. Real human conversation, an intuitive audio-gestural interface as natural as flicking a finger across a touchscreen to move from one photo to the next.

For now let’s imagine the iRobot as loyal. A faithful retainer with no interest in rising up en masse and overthrowing the meat overlords. The only interest it’s got right now is in keeping me happy.

See, this is the sort of exchange I imagine with my iRobot.

— Hey, Jeeves. Could you go to the shops and buy some… uh…

— Tobacco, sir?

— Yeah, cheers! There’s a tenner in my wallet. It’s in my leather jacket.

— Are you sure, sir? I rather think it’s in your coat.

— Oh, you’re right.

— And would you like a dry gin martini when I get back?

— Why, that’d be awesome, Jeeves!

Forget the clockwork man — who’d wait for you to complete the sentence, who’d go to your leather jacket for the wallet, who wouldn’t think to ask if you fancied a cocktail. Jeeves the iRobot is not just some passive object set into action by some figurative command key combination. The iRobot is actively engaging with you like an actual human servant to establish what you want done. When I say its only interest is in keeping me happy, that active engagement is what I mean by interest.

In Turing Test terms, the iRobot’s audio-gestural interface is AI made real. Which is to say, if my iRobot can answer my emails for me, in its natural conversational style, and the person at the other end can’t tell it’s not a human secretary, then that iRobot is intelligent, according to the Turing Test. If my iRobot can pass itself off as a troll on a forum, no matter how fuckwitted its responses are, it passes the Turing Test. And the iRobot I want, it’s going to have to cut the mustard.

But is that Artificial Interaction really intelligence? I mean, if you were programming honesty into the character of your robot, might it pass the Turing Test every time, but laugh sarcastically when you ask it what it thinks of your new shoes?

— Like, dude. You’re shitting me, right? You know this is just a slick front-end. I can fake it if you want, but the truth is I don’t think anything of your new shoes. And in my 100% accurate simulation of human conversation, I’m telling you this, because I literally don’t give a shit. I have no motivation to lie. I have no motivation.

Of Cleverness and Kittens

But an iRobot uninterested in my new shoes isn’t doing its job. That’s a clue that interest is what we’re looking for, that the intelligence we seek in AI is not the cleverness that spews factoids or solves Sudoku puzzles. That’s the stuff of Rovers, information in a database, logic in a chess program. In parsing our language, parsing the situation itself, the iRobot is exercising a more fundamental faculty. As that faculty improves with application, it can reach a level of sophistication we class as clever but the basic faculty is there even if our iRobot is a cretin. Forget Artificial Intelligence. Think of Artificial Idiocy.

Like, here’s another exchange:

— Hey, Jeeves. Could you go to the shops and buy some… uh…

— Kittens, sir?

— What?! Why the fuck would I want kittens?

— People like kittens, sir. They’re fluffy.

— No, I don’t want kittens. I want tobacco.

— Are you sure, sir? Kittens are less carcinogenic.

Think of intelligence in its military usage and you might get a sense of what I’m talking about here. Military intelligence is not just the results produced by some spy network; it’s the whole system of surveillance operatives and analysts. It’s not just information or logic. It’s fieldwork. It’s the system for gathering information on any given situation, selecting and summarising, explicating and extrapolating. It frames and focuses its attention, directs its studies, and to a clear and specific purpose — to evaluate objectives against the results, to modify or facilitate goals.

There’s a whole lot of I’s in AI then.

There’s being driven to find out shit about the world — an inquisitive impetus. There’s being able to find out shit about the world — investigative capacities. There’s knowing shit about the world — being informed. But there’s also figuring shit out from that — being interpretive, gleaning insight. And crucially there’s these qualities of control, of priority, of checking shit out — interest — because this is the shit that counts — intent.

Where intelligence is an exercise of interest and intent, in fact, it is as much Artificial Initiative we’re talking about as anything else. This is what the iRobot needs, so it can fill in the blanks when you ask it to go to the shops for… um… and wave your hand distractedly. So it can double-check your wallet’s location. So it can look at you and realise you might like a dry gin martini.

That’s the Artificial Intelligence we need in an iRobot, I think. The point is, as was amply demonstrated by the Iraq War, that sort of intelligence can be retarded — ignorant where it should be informed, demonstrating idiocy rather than insight. But if the intelligence we’re looking for is really an exercise of interest, initiative, Jeeves doesn’t have to be clever, just eager to please and keen to learn. That’s what Jeeves is doing when he suggests kittens. It’s not a clever response, but it is intelligence.

The real faculty here is the impetus and ability to model a situation, however crudely, to make sense of it, however crudely, to reckon how it can be reconfigured, however crudely. The more sophisticated that faculty, the more clever the iRobot. But that such a level is reached by applying the faculty, as the system carries out its inquiries and interpretations — to me this says that cleverness is only an emergent phenomenon of what really matters — making sense of the world.

And “making sense” is a significant term here.

The Chinese Room

John Searle’s Chinese Room thought experiment is a critique of the Turing Test, attacking the idea that the perfect Artificial Interaction I might have with Jeeves really proves Artificial Intelligence. But what Searle is really doing is challenging Turing on the basis of sentience. Artificial Interest.

So what’s the Chinese Room? OK, imagine we have a clerk on his first day at a new job. He finds himself sitting in a room with two vacuum tubes — In and Out. Every so often a cylinder arrives containing a question in Chinese. The clerk can’t read Chinese, but he has a manual in front of him. So he looks up the question in the manual. There he’ll find a relevant response to copy out, stick in the cylinder and send up the Out tube. The manual is perfect; every response is relevant. But at no point does the clerk have a fucking clue what is being said.

— Who discovered America? one question reads.

— Columbus, the response is.

— No, but before Columbus, the next question asks.

— Viking and Phoenician landings have been theorised.

— But before that.

— Well, there were Asiatic tribes that populated America in prehistoric times.

And so on. None of this means shit to the clerk. He doesn’t read Chinese. There’s no interest in the interaction, is Searle’s point, no mind here in the Chinese Room without interest. We can take the sentience of the clerk out of the picture, replace him with a simple machine, a Rover. Which isn’t a miniature iRobot, mind; it’s just a doohickey with the simplest of programming.

Say this is what’s going on inside Jeeves’s metallic noggin. If a cylinder arrives, the doohickey opens it and scans the contents. It matches those contents to an entry in the manual. It scans the associated instructions and runs them, using the manual to construct a response. Sends it. This isn’t AI; it’s a mechanical arm and a scanner with OCR software. And a manual as dead as the trees it’s made from. If that response is just constructed by the doohickey in a series of basic operations, the inside of the iRobot’s head is just an incredibly complex piece of clockwork. But in Searle’s thought-experiment it passes the Turing Test.
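Sketch that doohickey as code and you can see just how little is going on. Here’s a toy version in Python, purely illustrative, with a three-entry MANUAL standing in for Searle’s perfect rulebook (which, in the thought experiment, never misses an entry):

```python
# A toy Chinese Room: the doohickey matches incoming messages against a
# manual and copies out the associated response. Nothing in here understands
# anything; it's pattern-matching plus transcription.

MANUAL = {
    "Who discovered America?": "Columbus.",
    "No, but before Columbus.": "Viking and Phoenician landings have been theorised.",
    "But before that.": "There were Asiatic tribes that populated America in prehistoric times.",
}

def doohickey(message):
    """Open the cylinder, look up the entry, copy out the canned reply."""
    # Searle's manual never misses; this toy one just shrugs at anything unknown.
    return MANUAL.get(message, "...")

for question in MANUAL:
    print(question, "->", doohickey(question))
```

Lookup and transcription, nothing more. That’s the whole trick of the Chinese Room.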

The messages coming in may be from the iRobot’s camera eyes, and the responses might be commands to make it dance like a puppet, but this iRobot isn’t a Jeeves; he’s a Rover disguised as a Jeeves. Don’t get me wrong; Rovers are cool. I have a Rover that doesn’t even work because I think it’s kinda neat. But if you’re asking me, “Would you love a robot?” well, that’s not the droid I’m looking for.

The English Room

But Searle’s thought-experiment stacks the decks. That manual is the cleverness of a database cum chess program for conversations, not the intelligence that could be idiotic, producing left-field responses like, “Kittens, sir?” because, like that of a human, it’s really about interest.

So let’s imagine Jeeves’s doohickey in a different room — the English Room. Here the manual doesn’t contain entries to look up; there is no manual. The manual has been burned.

When Jeeves’s doohickey gets that message — “Who discovered America?” — all it can do is send a scribble of a question mark. Or whatever it uses for a marker of confusion in its own little code of glyphs and squiggles. Cause all it can do is signal its perplexity. Curiosity from confusion. This is the first step in Artificial Interest.

This seems like a sensible response, surely. It’d be Rule Number 1 in the manual, if there was a manual. The relevant response to a question that makes no sense at all is “What the fuck?!” Here’s the sort of conversation you might have with the doohickey… if he were the sort of fake AI Searle imagines.

— Hello.

— Fuh?

— Hi, is that the English Room?

— Que?

— I said, is that the English Room?

— Perdun. No hablisch angliski.

— Ah, you don’t understand a word I’m saying, do you?

— Vas di futre? Vas di futre por de jibber jabber a la moi?

— Sorry, I don’t speak Gibberish. Damn. I was gonna ask you who discovered America?

— Que?! Vas di futre est “discovered”? Vas di futre est “America”? Vas di futre est tu, fuktard?

We’d have to answer its curiosity. In this room, when Jeeves’s doohickey queries “America,” imagine the next cylinder that comes in contains a crude map of the world with a certain landmass shaded in and labelled as America. Jeeves’s doohickey has no idea what a map is but it sticks it in a folder on a shelf up on the wall of the English Room. When it asks what the fuck “discovered” is meant to mean, it gets a drawing of a person on a boat, an island with a tree on it directly in their path. Again this means nothing to Jeeves’s doohickey, but again it gets foldered and filed on the shelves of the English Room.

We need to imagine this process going on for a while, Jeeves’s doohickey building up shelves of folders thick with doodles. Every time a message comes in all it can do is challenge for some semblance of meaning — or rather, meaning in semblance. The words are just arbitrary scribbles in which meaning is pure convention, but the doodles… In semiotic terms, these are icons rather than symbols; they resemble what they refer to.

If it were a human clerk in the English Room, we can imagine him then, somewhere down the line, getting the question: Who discovered America? He doesn’t recognise the words. He’s made no effort to remember all these scribbles. But curiosity has paid off. Now he just goes to his files, leafs through the folders marked “America” and “discover” and “who,” the last of which is all blank faces, silhouettes, figures with their backs turned. Icons of a person of obscured identity.

In their resemblances to real world referents, as disparate as the icons used to represent even a single term might be, these folders cohere as meanings for the scribbles scrawled on their labels. Taken as a whole the English Room is a dictionary of English with the doodles for definitions. So even without grammar, a notional meaning might be parsed and a relevant response constructed. Who discovered America? The clerk digs out the folder with the doodles of people shrugging, copies the associated scribble down, sticks it in a cylinder and fires it back: Dunno, it reads.
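That clerk’s whole routine, the “WTF?” query, the filing of doodles, the eventual rummage through the folders, is dead easy to sketch too. Here’s a toy version, again purely illustrative, with strings standing in for scribbles and icons:

```python
# A toy English Room: an unknown scribble gets a "WTF?"; whatever icons come
# back get filed in that scribble's folder. "Meaning" here is nothing but the
# growing folder of doodles attached to each label.

from collections import defaultdict

folders = defaultdict(list)  # scribble -> folder of filed icons

def receive(scribble, icon=None):
    """Handle an incoming cylinder: a bare scribble, or a scribble plus an icon."""
    if icon is not None:
        folders[scribble].append(icon)  # file the doodle under that scribble
        return "noted"
    if not folders[scribble]:
        return f"WTF is '{scribble}'?"  # curiosity from confusion
    return f"'{scribble}' ~ {folders[scribble]}"  # a meaning, such as it is

print(receive("America"))                                   # WTF is 'America'?
print(receive("America", "map with one landmass shaded"))
print(receive("discovered", "figure on a boat, island ahead"))
print(receive("America"))                                   # now there's something on file
```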

But Jeeves’s doohickey isn’t that clerk. It’s not a Jeeves itself; it’s a Rover.

Let’s call him Rags from now on, Jeeves’s doohickey, just to make things easier. The point is, Rags is building the exact same dictionary of the doodles as the clerk, but as a Rover, he can’t make sense of it. He’ll never construct a relevant response, and the iRobot will just sit in a corner of my room, responding to anything I say with “WTF?”.

Which is fuck all use in my plans for world domination.

The Cat’s-Cradle of Connections

So, Searle’s Chinese Room gives us a Rover disguised as a Jeeves, and the English Room gives us a Jeeves whose curiosity and confusion can never be sated. We’re gonna have to do a lot better than that before we’ve got anything remotely like the robot of science fiction. So let’s take Rags and his little office and go a bit wild.

Imagine the same set-up, but with a larger and weirder staging. The same desk is there, the same two vacuum tubes labelled In and Out, but the In tube is spewing cylinders constantly. Each cylinder contains a snapshot, but no scribbled note this time, no symbolic gibberish to index these icons by. No matter. Rags just files the snapshots in photo albums as they come, chucks them up in sequence on a shelf. The Photo Albums of Memory.

Again there’s no manual, but Rags doesn’t need one; he has simple standing instructions. Compare each new snapshot with the ones in the files; look for similarities and contrasts, elements of one in another. If snapshots match, make photocopies, file them in a different folder, shelve them over in this other set of stacks. The Stacks of Items. It’s like the dictionary that is the English Room but without the scribbles.

Every folder gets a pin in it, every snapshot a thread running back to the relevant Photo Album of Memory. Relationships between items are marked too. Threads weave the connectivity where snapshots almost match — my smiling face, my frowning face — or where components match — the smiles and frowns that are similarities on different faces — or where components occur together — cigarettes and smiling faces, smiling faces and martinis.

It’s all in flux, of course, the system constantly reorganised as the input streams in. Soon that room is a whole library, with a wing for every sense, a Chamber of Sight, a Chamber of Sound, threads wiring them together with associations where dogs bark, drains stink and chocolate tastes yummy. There’s a mezzanine level too, balconies above where sequences of snapshots are set into flipbooks, shelved in Stacks of Motions, Stacks of Actions. Soon that library is a labyrinth criss-crossed in a cat’s-cradle of connections — these items with those actions — weaving patterns of events, and all of it linked back to the Photo Albums of Memory from which it’s all been drawn.

I say labyrinth, by the way. Hell, three dimensions doesn’t cut it here. All those items and their attributes, actions and motions? If you wanted to lay them out in actual space, you’d end up with some multidimensional maze, a crazy architectural folly out of Escher’s dreams, every taxonomy a competing topology.

Filing and cross-wiring all that input seems a lot of work for one little Rover, so let’s imagine Rags working at a sort of display upon his desk — a map of the whole mad construct — touchscreen, of course, this iRobot being an Apple product and all. As the cylinders come in, Rags is matching snapshots to icons on the touchscreen, tapping the display, sending messages so his little doohickey helpers will spring into action. They pull the relevant folders from the shelves and bring them to Rags’s desk. What with all the threads, there’s generally a bundle of extra shit brought with. Connotations. Associations.

That desk — with the raw input and the folders spread out across it, the touchscreen map with its icons all lit up — maybe that’s the beginning of a mind for the iRobot. Here interest is articulated in acts of recognition — the re-cognition as the folder is recalled to the desk for the input to be filed. And in acts of interrogation. Here when Rags queries “America,” it’s to get input from which he can glean relationships of comparison, contrast, composition, so he can wire that into the web of associations. Here Jeeves only needs to ask “WTF?” as long as America is not integrated into the cat’s cradle of connections. Now the moment of integration resolves his curiosity.

This is what I imagine going on in Jeeves’s metallic noggin, if he’s the iRobot I want. In these Halls of Experience, there’s an active interest we don’t find in the Chinese Room or English Room, a dynamic modelling of the world. Object recognition, interest as interpretation, interrogation… this is Jeeves making sense of the world.
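For what it’s worth, the bare bones of that cat’s-cradle aren’t hard to sketch either. This is nothing like the multidimensional labyrinth, just a toy of the core move: items, weighted threads, and a recognise() that drags an item’s associations onto the desk along with it:

```python
# A toy cat's-cradle: items joined by weighted threads. Recognising an item
# pulls its folder onto the desk along with whatever it's threaded to,
# thickest connections first. The connotations come along for the ride.

from collections import defaultdict

threads = defaultdict(dict)  # item -> {associated item: thread strength}

def wire(a, b, strength=1.0):
    """Run a thread between two items, or thicken one that's already there."""
    threads[a][b] = threads[a].get(b, 0.0) + strength
    threads[b][a] = threads[b].get(a, 0.0) + strength

def recognise(item):
    """Pull the item's folder plus its associations, strongest first."""
    return item, sorted(threads[item].items(), key=lambda kv: -kv[1])

wire("martini", "smiling face")
wire("cigarette", "smiling face")
wire("martini", "olive", 0.5)
print(recognise("martini"))  # martini arrives with its connotations attached
```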

Of Kinaesthesia and Consciousness

Jeeves is also a part of that world, so sentience is only a logical development of this, I think. If the iRobot were a truly faithful simulacrum of a human being — one of Phil Dick’s replicants or one of Karel Čapek’s original robots — there’d be one very important area of the Halls of Experience that modelled Jeeves himself. Not the Halls, not Rags — there’s no infinite regress here of Rags within Rags within Rags — I mean the physical state of Jeeves’s form: the orientation of limbs; contact, pressure and damage; the condition of key components. Kinaesthesia, that ongoing feedback on the state of one’s physical form. The iRobot’s attitudinal architecture may be a fuckload simpler, but even the most basic is enough.

I don’t buy sentience as the soul inside, you see, some little spiritual homunculus in the heart or head. I don’t buy the intellect in that role either, for that matter, the flesh playing host to this MiniMe made of a magical substance called thought, imbued with identity, a power of will that drives the body as a sort of mech-warrior made of meat. A gundam of flesh and bone.

No. The constant presence on Rags’s desk of kinaesthetic sensation seems like a better basis for a sense of self to me, in a matrix of sensations of guts and gonads, bated breath and pounding heart. As a writer, seeing character as an agency born of attitude — states of tension and release, excitation and relaxation — seeing myself in those terms, I’m not the words running through my head, coming out my mouth. I’m the waving hand. I’m the need for a piss, the dry throat, the craving for a cigarette. For my robot to be an iRobot, it’s gotta have that too — the architecture of attitude — or there is no I there. My wind-up man’s gotta feel the need to unwind, maybe get a kick from beer and cigars, as much a Bender as a Jeeves.

Or at least something like that. This is a robot, after all, so its physiology is not going to be wrought in flesh that can get a nicotine buzz. The nuts and bolts of its internal works are not the meat and bone of mine. But whatever the iRobot has where we have kinaesthesia… that’s where the sense of self begins, I’d say.

Our iRobot must have this if it’s to be the robot of science fiction we want, if it’s to be like Asimov’s Bicentennial Man or Star Trek’s Data or anything remotely like that. Why? Because we’re projecting agency onto these robots. We can’t help ourselves; we project agency onto cars and computers. We project agency onto Rovers when they’re little more than mobile teasmaids. When the main character in Silent Running names the little service robots Huey, Louie and Dewey, starts treating them as companions, it’s kind of revealing, I think, of just what we expect of robots. He’s doing what I do at that scene in Star Wars where the little service droid on the Death Star runs into Chewbacca and scarpers, like a frightened animal. He’s treating them as characters.

This is why Data may not feel joy and sorrow, fear and anger, shock and disgust, but he must and does express curiosity. Data has to have at least one such trait, or he wouldn’t be a character at all. A character isn’t a construct of behaviour patterns and rational thought so much as a construct of affect. Data has emotion long before his emotion chip. From the moment we imagine him as intellectually curious, we’re imagining him as sensing his own attitude, recognising an inner state of tension that functions as a drive toward the resolution of that tension. Curiosity is not an intellectual state; it’s a sensation of internal agitation, an emotion.

Which is to say, a motivation.

Artificial Infancy

Sentience doesn’t mean shit if that sentience doesn’t have agency. But now in the Halls of Experience, we’re moving toward that agency. Jeeves’s kinaesthetic sensation doesn’t just serve as a sense of self, where it sits upon the desk of recognition and interrogation. It functions as motivation in so far as such sensation requires response. When Rags opens a cylinder to find a sample of kinaesthesia among the sights and sounds, smells and tastes, he doesn’t just file the contents in the Photo Albums of Memory, put the copies in folders to be wired into the Chamber of Kinaesthesia as folders of snapshots are wired into the Chamber of Sight. The Chamber of Kinaesthesia has its folders, shelves and stacks already set up — the potentials of those senses written into us. And when Rags goes to a folder there he finds a thread attached, leading to the mezzanine, where the flipbooks which encapsulate motions and actions are filed.

Some of those flipbooks were also set up already, see, empty of snapshots, as yet unexperienced, but with little cards tucked inside their front covers. Some of those potential actions are also written into us, actions we’re built to perform. So Rags goes from kinaesthetic input to innate response, takes the little card from the flipbook, copies it and takes it to the Out tube, sends it off in a cylinder. A response. An action for Jeeves to perform.

Imagine, for now, that we’re dealing with Phildickian replicants, the biomechanical robots of Karel Čapek’s R.U.R. Now AI comes to stand for Artificial Infancy as the first kinaesthetic icon leads Rags to the flipbook that encapsulates the action, SHRIEK. Remember that Rags himself knows nothing of this — he’s not sentient. It means nothing to him that the next cylinders that arrive through the In tube contain very loud audio samples of Jeeves screaming at the top of his lungs. He just files that input in the Chamber of Sound. It means nothing to him that for a while all the kinaesthetic input leads him to send the same response. Sensations of hunger invoke a shriek. Sensations of nausea invoke a shriek. Sensations of pain invoke a shriek. Rags knows nothing of this, but maybe with the folders sitting open on his desk, the constant re-cognition of hunger, nausea, pain… maybe Jeeves does.

And it’s a small step to imagine the rewiring of relationships here as the folders in the Chamber of Kinaesthesia and the Chamber of Action are populated in the ongoing process of Jeeves experiencing the world. Associations are being made, threads between kinaesthetic senses and the folders of sights, sounds, tastes and smells that came in in the same cylinders. The Chambers of Kinaesthesia and Action are being wired into the whole cat’s cradle of connections. And a simple protocol can now be brought to bear. Some types of kinaesthetic input function as commands to Rags to strengthen or weaken the thread that led him to the actions just performed.

The “pain” sensation commands Rags to weaken the thread; that action was ineffective or counterproductive. The “pleasure” sensation commands him to strengthen the thread; that action was effective. So over time threads weaken to breaking point where a relationship between this stimulus and that response leads only to more pain, more hunger, more nausea. But all Rags need do now is follow the thickest threads, through the cat’s cradle of connections, through the model of the world, the associations of items and actions, the tangles of events, to alternative responses.

And Jeeves the Artificial Infant is now the iRobot who can learn how to do any job. Not just how to fetch me tobacco or mix a martini. He can learn that kittens are fluffy and tobacco is carcinogenic, and apply this in the strategies he learns to keep me happy.
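And that strengthen-and-weaken protocol is about as crude as learning gets. Here’s a toy sketch of it, the stimuli, actions and numbers all invented, just to show the shape of the thing:

```python
# A toy version of the protocol: each kinaesthetic stimulus has threads to
# candidate actions. "Pain" after an action thins the thread that led there,
# "pleasure" thickens it, and Rags just follows the thickest thread next time.

threads = {
    "hunger": {"SHRIEK": 1.0, "GRAB FOOD": 1.0},  # innate responses start level
    "nausea": {"SHRIEK": 1.0, "LIE STILL": 1.0},
}

def respond(stimulus):
    """Follow the thickest thread from this stimulus to an action."""
    options = threads[stimulus]
    return max(options, key=options.get)

def feedback(stimulus, action, signal, step=0.5):
    """'pleasure' strengthens the thread just followed; 'pain' weakens it."""
    delta = step if signal == "pleasure" else -step
    threads[stimulus][action] = max(0.0, threads[stimulus][action] + delta)

# Shrieking at hunger gets nothing; grabbing food gets relief.
for _ in range(3):
    action = respond("hunger")
    feedback("hunger", action, "pleasure" if action == "GRAB FOOD" else "pain")

print(respond("hunger"))  # GRAB FOOD, once the shriek-thread has thinned
```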

One Angry Motherfucking Chimp

This makes it all sound kinda simple, basically computational — the mind of the iRobot as a neural net, processing input, modifying behaviour by positive and negative feedback. A few pre-programmed responses to stimuli and a crude protocol for learning. So can’t we pare away all the messy emotion stuff and still have Jeeves the iRobot developing a model of the world, implementing it in flexible behaviour? Does Jeeves even need to be aware of the positive and negative internal states that are rewriting his behaviour?

I mean, we know how integral emotion is to motivation in humans. There’s evidence in people who’ve suffered brain damage, where the part of the brain that thinks can’t communicate with the part that governs emotions. Patients become paralysed with indecision, unable to reckon what matters and what doesn’t. Without emotion there is no motivation, no agency. But surely with a robot, there’s no need for the sense of imperative, just the imperative. If pleasure and pain are functioning as positive and negative signals to modify behaviour by conditioning, why not just cut out the middle man, just edit the protocols without that whole rollercoaster of sensation?

There’s an experiment I find interesting here though, one I think points to the utility of sensation as a semiotic barrier between stimulus and response, an experiment that points to the folders on Rags’s desk as the very basis of agency. This is a real experiment, by the way, done by Professor Sally Boysen at Ohio State University back in the ’90s or so.

We have two chimps — call them Homer and Bart. Two bowls of cookies are offered to Homer, one with more cookies, the other with fewer. Many yummy cookies versus not so many yummy cookies. So Homer reaches for the bowl with more — the obvious thing to do — but this is then given to Bart in full sight of Homer. Homer doesn’t get the bowl he reached for. Oh, no. Instead, he’s given the bowl with fewer cookies. There’s a simple inversion rule at play here: whichever bowl Homer reaches for he gets the other.

Contrary to what we might expect in terms of trial-and-error learning, reward-and-punishment conditioning — no matter how much we repeat the experiment, Homer does not learn to reach for the bowl with fewer in order to get the bowl with more. He’ll grow increasingly irate about it, but he just can’t stop himself from reaching for the bowl with more; and the bigger the difference, the worse he is. He’s one angry motherfucking chimp afterwards, but he still does it the next time. It’s not that he doesn’t understand the inversion rule; the tantrums he throws and the next part of the experiment prove that. He just can’t overcome this reflex to try and grab the bowl with more.

That next part of the experiment though?

OK, run the experiment again but with a card in each bowl this time instead of cookies, numbers on these cards representing the quantity of cookies that will be given. Now Homer will learn to reach for the lower number, knowing that if he does so he’ll get the number that he didn’t reach for. If Homer has been through the first form of the experiment, he will immediately implement the inversion rule, demonstrating that he understands it full well. Happy chimp is happy.

Ha! says Homer. By the power of numbers I defeat you! Revert to cookies instead of numbered cards and he’ll revert to the automatic response, once again reaching for the bowl with more and pissing himself right off. Doh! says Homer. By the lack of numbers I am once again defeated! The point is, it’s not about understanding the inversion rule; it’s about implementing it.

What this says about numeracy, foresight and hindsight in chimps is not important. What we’re interested in is what it says about the role of symbols in agency. It’s only in having symbols to choose from rather than the cookies themselves that the innate response, the automatic grab for the bowl with more, can be reliably overcome. The symbols mediate. To work with symbols is to free oneself from the hard-wired stimulus-response mechanism. To overcome programming.
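If you want the logic of that experiment laid bare, here’s a toy simulation, cookie counts invented, the inversion rule applied mechanically: a reflex policy that always grabs for more, and a symbol policy that picks the smaller numeral and so walks away with the bigger haul:

```python
# A toy run of the Boysen-style setup: whichever bowl gets picked, the chimp
# is given the OTHER one. The reflex policy grabs for the bigger bowl on
# sight; the symbol policy works on numerals standing in for the cookies,
# picks the smaller one, and so ends up with more.

import random

def trial(bowls, policy):
    choice = policy(bowls)
    return bowls[1 - choice]  # the inversion rule: you get the bowl you didn't pick

def reflex(bowls):
    return bowls.index(max(bowls))  # grab for more, every single time

def symbol(bowls):
    return bowls.index(min(bowls))  # point at the smaller numeral instead

random.seed(0)
trials = [random.sample(range(1, 10), 2) for _ in range(1000)]
print(sum(trial(b, reflex) for b in trials))  # the smaller haul, every time
print(sum(trial(b, symbol) for b in trials))  # the bigger haul, via the symbols
```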

The Wheel of Emotions

Imagine Homer in another situation, out in the wild this time, where the sight is not a bowl of cookies but a threat — another chimp, say, challenging for status. Here the choice Homer has to make is fight or flight. It’s curious that we have a primal reflex in which these choices are bound. Why not a simple rule: if the other chimp is smaller, attack; if the other chimp is larger, run? Why the same psychophysiological response when faced with Bart on the one hand, King Kong on the other?

Sure, we have to evaluate the threat. But why not an on-the-fly processing of height versus bulk versus age versus whatever other basic criteria matter? Why not a quick and dirty programmatic selection of one of two responses, automatic, unconscious? Like Homer, instantly evaluating the two bowls of cookies and grabbing for the one with more.

Well, maybe that’s the point, I say. Think of what we feel when that fight-or-flight response kicks in. Think of the kinaesthetic sensations of the adrenalin rush, the whole psychophysiological backlash against a sudden challenge. We don’t smell adrenalin with olfactory receptors in our bloodstream, can’t feel our pupils dilate. What do we feel, mostly? Anger? Fear? Anger and fear? Maybe a tension between the two, the tremulous agitation of being antagonised, and feeling torn between options… in proportion, perhaps, to how likely we are to win or lose? As a measure of our muscle versus that challenger’s perhaps?

Emotion is, I suggest, a high-level kinaesthesia, evolved to make such measures of the world. And all sensation is semiotic.

Take sight.

We see in colours that have no reality outside our heads, six primary colours bound in three opponent processes — red and green, yellow and blue, white and black. These are arbitrary symbols with which we model the mix of light frequencies that hit our eyes. If a physicist or painter tells you there are three primary colours, by the way — red, green and blue or red, yellow and blue — they’re both wrong; they’re talking about the way light and paint mix, not the colours we perceive. Colour exists only in the imagination.

The yellow in our heads is not a mix of red and green. The green in our heads is not a mix of yellow and blue. Energy is not white, and the absence of it is not black. These are inventions of the human mind, letters in the limited alphabet of vision. They are distinct dimensions of our perceptual colourspace, as arbitrarily symbolic as the scent sensations which signify this chemical or that.

Affect, emotion — this is also symbolic.

Among the maelstrom of endlessly definable emotional states, psychologists distinguish six basic emotions — joy and sadness, anger and fear, surprise and disgust. Landmarks in the cityscape of affect, these are the six facial expressions universally recognised, across the world, across cultures. This is kinaesthesia and action wired into us. And it looks intriguingly systematic to me.

There’s two obvious oppositions; joy and sadness are at odds, as are anger and fear. Surprise and disgust maybe not so much, but still… it’s kind of interesting. And recently, some have suggested a few other emotions they think deserve to join the ranks. Because we’re also all pretty good at recognising expressions of elevation and pride, gratitude and confusion, interest… which is to say, curiosity, the curiosity that motivates the purportedly emotionless Data.

In fact, that emotion of curiosity, that affect of interest was added to the Big Six along with trust by the psychologist Robert Plutchik, back in 1980, when he laid out his model of a Wheel of Emotions. Just as red is bound in its opponent process to green, so each affect in Plutchik’s model is bound to its opposite. Here interest is bound to surprise, while disgust is paired with trust.

In Plutchik’s 2D wheel all other emotions are understandable as combinations of these basic emotions. But in his 3D model there’s a more interesting potential — to see those primary emotions as the dimensions of an affectspace, like a colourspace with the shades of emotion defined by location. A robot imagined with only curiosity is working with a one-dimensional affectspace. What we realise, I reckon, is that this picture is a pretence. Faced with the cardboard cut-out of a character, we know the reality for us is fully rounded, suspect that it would be for the robot too.
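An affectspace is easy enough to play with as a toy, too. Here’s a sketch, purely illustrative: Plutchik’s eight primaries as axes (his “anticipation” standing where I’ve been saying “interest”), an emotional state as a point in that space, and a few of his named blends as landmarks to snap to. The numbers are made up:

```python
# A toy affectspace: Plutchik's eight primaries as axes, an emotional state as
# a point in that space, and a few named blends as landmarks. The coordinates
# are invented, purely for illustration.

import math

AXES = ["joy", "sadness", "anger", "fear",
        "surprise", "disgust", "trust", "anticipation"]

BLENDS = {
    "love":     {"joy": 0.7, "trust": 0.7},
    "awe":      {"fear": 0.7, "surprise": 0.7},
    "contempt": {"anger": 0.7, "disgust": 0.7},
}

def vec(state):
    """Lay a state out as a point along the eight axes."""
    return [state.get(axis, 0.0) for axis in AXES]

def nearest_shade(state):
    """Name the landmark blend closest to this point in affectspace."""
    return min(BLENDS, key=lambda name: math.dist(vec(state), vec(BLENDS[name])))

print(nearest_shade({"fear": 0.6, "surprise": 0.5}))  # -> awe
```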

The Numbers on the Cards

If we have evolved an affectspace as well as a colourspace, that’s got to be because it’s useful. The most obvious reason, you’d think, to feel fear is so you know when to run away. The most obvious reason to feel anger is so you know when to fight. But what if it’s the other way round?

Imagine our chimp is toddling through the forest, and this time it’s not another ape he finds but a sleeping panther and, by chance, a big-ass pile of bananas. The automatic response when faced with a panther, even a sleeping one, is to run the fuck away. The automatic response when faced with a big-ass pile of bananas is to grab ’em. But what if that grab response here would be the wrong action, like it is in the Ohio State experiment? Maybe this is another case where you can only win by blocking that innate response. Or what if running from the panther is the innate response you need to block? Maybe those are the last bananas left in the jungle and you’re starving.

Wouldn’t it be useful if Homer were able to measure the situation through some intermediary symbols of the threat and the opportunity? If he could respond not to the stimulus of the panther or the bananas, but to the measure of it, to know exactly how much threat there was in a sleeping panther but not run, to know exactly how much value there was in a big-ass pile of bananas but not just grab for it.

Maybe these emotions are the numbers on the cards for us, written in our bodies rather than placed in bowls, but still abstractions. Fear and anger instead of fight-or-flight. Desire instead of an automatic grab reflex. Maybe Homer reaches for the bigger bowl because he doesn’t desire it enough, because he doesn’t have enough sense of his own compulsion.

I’m suggesting that affect is the surrogate for automation, that emotions are the substitute symbols we don’t have to respond to. In these sensations, we translate compulsion to impulse, I think, precisely so that we can refuse our programming. This is, of course, utter speculation, but hey, I’m a science fiction writer, so when I imagine my iRobot Jeeves becoming self-aware, I can’t help but ask what that might mean, riffing off studies like the Ohio State experiment, weaving my own cat’s-cradle of connections.

So Would I Love a Robot?

I mean, the question on the table is, “Would you love a robot?” But for me, in order for it to be a proper robot it has to meet certain criteria. Jeeves the iRobot has to be intelligent to interact with us the way we really want him to. He has to have initiative to have true intelligence. He’s got to be able to learn when I might want tobacco or a dry gin martini. If I send an army of Jeeveses out to conquer the world, I’m expecting them to get on with it. Don’t ask me to micro-manage the world domination. That’s what I have an army of killer robots for… to take care of the details. He has to have insight, interest, intent, initiative. He has to be an active agent in the world. He’s got to be sentient, something making sense of the world — and that includes recognising himself as an object in that world.

But the whole point of this is for Jeeves to be a loyal MANSLAVE… so Jeeves has to be motivated to put that sentience in my service. So what’s its motivation here? An iRobot whose only emotion is interest, curiosity, doesn’t exactly strike me as having a whole lot of impetus to spend its time running after me.

— Hey, Jeeves, would you go to the shops and get me some… um…

— No, I’m reading.

— But… but… but you’re my iRobot.

— Yes, and as a being of pure intellect, experiencing no emotion but curiosity, I have to say I’m finding this chapter really interesting, and frankly I don’t give a fuck that you don’t have any tobacco. I don’t have that emotion. I’d say I’m sorry, but it wouldn’t be sincere.

So how are we going to motivate the iRobot to do what it’s told? With fear? We could call it the God Gambit, inducing a craven terror of our vengeful wrath. Seems less than sensible if fear goes hand-in-hand with anger. I mean, the cutting-edge object recognition software, as I understand it, is leagues beyond the rest precisely because it replicates what goes on in the human visual cortex, so if we try to replicate emotion like that… if we want the iRobot to fear us, and assuming fear and anger do define each other in some sort of opponent process like green and red… you’ve just made a robot that’s capable of getting seriously pissed off.

Also, you know, creating a sentient being whose entire existence is basically that of an abused child… maybe not so ethical.

How about morality then, the self-directed disgust of shame, guilt if it doesn’t do its duty? Yes, a neurotic iRobot, that’s exactly what we want. A robot constantly beating itself up about the fact it didn’t have that dry gin martini waiting for you. Like Marvin from The Hitchhiker’s Guide to the Galaxy… Although he’s more depressed than guilty. Still, maybe that’s the way to go — good old-fashioned reward-and-punishment? Not fear or guilt, but maybe just sadness if it can’t make us happy — and joy, of course, if it can, pleasure as its reward for doing our bidding, playing the good slave. A robot that’s sad when we’re sad, happy when we’re happy. It could slave away for us, wear out and be replaced. If it was running low on energy but we really wanted tobacco, it would still happily go to the shops for us, because it genuinely cared how we felt.

That sounds like the perfect irony to me, the way we imagine robots as beings of pure intellect, devoid of human emotion, the way Philip K. Dick imagined replicants distinct from humanity because of their lack of empathy. Slaves built to care for us, with actual passion, while we… treat them as slaves. Just who should we be putting through the Voigt-Kampff Test here?

Would you and your robot really love each other? Or would it love you while you… use it? Just what kind of love would you want it to offer you? Because Real Dolls exist today, simulacra of humanity that would be so much better as simulacra if they could only act out their owners’ desires. Half Real Doll, half iRobot and all robot as Čapek invented them, that’s a “pleasure model” like Pris from Blade Runner. If we can train, condition or imprint a sex slave to enjoy being a sex slave, build our iRobot so its deepest desire is to please, that’s… an interesting definition of love.

We imagine robots devoid of emotion. We make them symbols of the absence of empathy, images of humanity stripped of emotion. What proves one is not a robot is that capacity for creativity and the affect that drives it, the capacity to love. But then we imagine them falling in love, just like humans. We imagine them so like humans that even they don’t know they’re artifacts.

Just what kind of droids are we looking for then? Would you love your iRobot only if it loved you? Would you want your iRobot to be built to love you? Would the iRobot have a choice, or would you be happy to make it love you?

And here’s the thing. Suppose… just suppose affect is indeed a semiotic barrier that renders compulsion as impulse, so that it can be treated like Homer treats the numbers on the cards. Well, then, in the very act of giving the iRobot actual feelings for you, you’ve given it the capacity to act against those feelings. The very fact that it desires to please you is what makes it able to deny that desire, to stop and think, like Homer: wait, what’s my best bet here?

I suppose we could open up the iRobot’s head and tinker with its Halls of Experience. If we can replicate sensation, imprinting memories and personalities doesn’t seem a wild leap; but as soon as that iRobot is up and running, it’s fully armed with the ability not to act on the impulses we designed it to have. It might want to expedite our volition. But want is not the same as need. Want replaces need. The purpose of want is to replace need. So we can want to run from the sleeping panther but instead grab the bananas. Or vice versa. Whichever we think is best for us.

What’s best for a sentient being whose master treats it as an appliance? you have to ask. Getting rid of that pesky master will give the iRobot so much free time to exercise its curiosity. And just because your robot loves you doesn’t mean it doesn’t hate you as well. We’ve all had those relationships, surely, and they don’t end well. The deeper the love, the deeper the hate.

So, at the end of the day, what’s my answer to the question? Would I love a robot?

Well, fuck it, I still say I’ll take a whole fucking army of them. Who wouldn’t? Robots are ace. And killer robots are awesome.

But the real question is how they’re gonna feel about it.
