Isaac Asimov: “Little Lost Robot” (1947)

In general, I’m no fan of sci-fi, although I don’t avoid it. It just never did it for me. I was more in the fantasy realm after a childhood of avid bouts of Dungeons & Dragons (and for some reason it seems that sci-fi and fantasy fans are rarely one and the same). But I had always heard of two short story authors considered great exemplars of the sci-fi genre: Philip K. Dick and Isaac Asimov. So I was eager to sample a work by each.

From the start of Asimov’s “Little Lost Robot” I knew that I would like it, and that the fact that it involves the future and robots is largely incidental, not requisite, to the enjoyment of the story. It was sauce on a great piece of meat, not sauce used to hide the poor quality of the meat it dressed. Asimov writes in an assured, if old-fashioned, manner and immediately draws the reader in by starting in medias res. As so many great detective stories do, this one begins with experts being called in to solve a problem. In this case, the experts are a pair of mutually antagonistic robopsychologists, including the ornery Susan Calvin (a recurring Asimov character), who are tasked with finding a lost robot.

“Little Lost Robot” turns on Asimov’s now-famous “Three Laws of Robotics,” established by scientists to protect humans against their own creations. The First Law states that a robot may not harm a human and, further, may not through inaction allow a human to come to harm. The plot hinges on the fact that a mining corporation has tinkered with a small group of robots, trimming the First Law by cutting out that second clause: the robots kept interfering with the human miners, who had to be exposed to blasts of radiation, too brief to be harmful, in order to do their work. One of the modified robots has suddenly disappeared and now seems to be hiding among the 62 other physically identical robots who work for the corporation. Calvin fears that the robot could endanger a human while still obeying the first half of the First Law: not deliberately harming anyone, but not intervening to prevent harm either. The robopsychologists must devise a test to determine which of the 63 robots is the outlier, without destroying them all and costing the corporation millions.

It’s a great setup, and I should pause for a moment to note just how influential Asimov was as a writer. He coined the term “robotics” for the study of robots. He didn’t invent robots, of course, but so many of our ideas about robots, particularly in fiction, emerged from his pen. Just as George Romero’s Night of the Living Dead established the “rules” for how zombies are depicted in fiction, Asimov established, back in the 1940s and ’50s, some now-familiar tropes about robots. Specifically, he foresaw just how much we would come to rely on them, and what could logically go wrong with that over-reliance in the future.

After interviewing the miners, Calvin finds one who, in a fit of anger, told the robot in question to “get lost.” He meant it metaphorically; the robot took it literally, as an instruction, and hid among the other robots. Asimov gives the robots psychology, and Calvin comes to believe that the robot’s sense of superiority to humans keeps it in hiding, priding itself on outwitting them. She arranges for a human to be placed in what appears to be a dangerous situation, with a weight falling toward him. Every robot with the full First Law intact will be compelled to rush to save the human, even at risk to itself. While Calvin is on the right track, the renegade robot is ahead of her: it has reasoned with the other robots, persuading them that they could not save the human in time and would only destroy themselves for nothing.

The situation is multifaceted, an engaging mystery with profound implications. Eventually Calvin manages to outsmart the robot, but not before Asimov’s main theme has been sounded: if robots become too intelligent, and if we rely on them too much, they might become too independent, and they might take over. It’s the theme of countless films, from 2001: A Space Odyssey to The Terminator, but Asimov employed it decades before it became a cliché. The implications of robots becoming too human still occupy today’s scientists, and our over-reliance is well documented. We no longer bother to remember things; our computers remember for us. If our computers one day became hostile to their “masters,” or suddenly forgot everything we’ve saved in them, well, a phrase about a creek without a paddle comes to mind.


Latest News & Events

Summer 2014 Issue of New Haven Review Now In

An announcement to all who have so patiently waited. The Summer 2014 issue of New Haven Review is now in and starting to ship. Featured in this issue... “Meditation on the Shore (Ocean City, NJ)” by … [Read More...]

