
Thursday, April 19, 2012

Does gaining a personality and emotions mean a robot will become violent?

Straight away the answer is an obvious no. Having a personality and/or emotions will not necessarily mean a robot will turn bad or crazy. At least not in the fictional world.

K-9 - Doctor Who

R2-D2 - Star Wars

What I believe would likely happen in reality is that robots that develop personalities would become something other than us.

Whether they liked us or not, they would be other, so we couldn't judge them as bad or crazy for acting against us (if that's the case). It would be the equivalent of judging a bird that hisses at you as bad or crazy, when in fact the bird just doesn't want you so close, or to touch it, etc. Human germs! Or it would be like finding out a whale's opinion of you. That they'd be pissed over our repeated killing of their family members wouldn't mean they're bad or crazy. Just pissed.

Robots having personality and emotions would set them apart from humanity, not draw them closer. In this way they would become observers, active or inactive, as independent of us as we are of other life-forms (there's a connection but also a distance in understanding). Through this they might just demand truths from us that would be rather uncomfortable to face.

Red Dwarf

In fiction, the most famous and most often cited way of bringing a robot closer to humanity, whether it has a personality or not, is applying Asimov's Three Laws of Robotics. The only other option commonly used is altering the robot's programming, repeatedly updating its settings so it has no choice but to behave as required.


Isaac Asimov


  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.


These methods could be used in reality too, to control any undesirable personality traits and to stop a robot from acting against humans.
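To make that a bit more concrete, here's a minimal toy sketch (in Python; every name in it is hypothetical, not any real robotics API) of how the Three Laws could be expressed as a strict priority check over a robot's candidate actions: the First Law vetoes outright, and obeying orders (Second Law) outranks self-preservation (Third Law).

    # A toy sketch (not a real robotics API) of Asimov's Three Laws
    # as a strict priority ordering over candidate actions.
    # Every name here is hypothetical, purely for illustration.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_human: bool = False       # Law 1: would this injure a human?
        neglects_human: bool = False    # Law 1: would it let a human come to harm?
        ordered_by_human: bool = False  # Law 2: was this ordered by a human?
        endangers_self: bool = False    # Law 3: does it risk the robot itself?

    def permitted(action: Action) -> bool:
        """First Law as an absolute veto: never harm a human,
        never stand by while one is harmed."""
        return not (action.harms_human or action.neglects_human)

    def choose(actions: list[Action]) -> Action | None:
        """Pick the best permitted action: obeying orders (Law 2)
        outranks self-preservation (Law 3)."""
        allowed = [a for a in actions if permitted(a)]
        if not allowed:
            return None
        # Sort so human orders come first; among those, prefer staying intact.
        allowed.sort(key=lambda a: (not a.ordered_by_human, a.endangers_self))
        return allowed[0]

    options = [
        Action("push bystander aside", harms_human=True),
        Action("walk into fire", ordered_by_human=True, endangers_self=True),
        Action("fetch toolbox", ordered_by_human=True),
    ]
    print(choose(options).name)  # -> "fetch toolbox"

Of course, the genuinely hard part in reality would be deciding what counts as "harm" or "inaction" in the first place; the priority ordering itself is the easy bit.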


A few questions spring from these proposals, particularly Asimov's laws.
  • Will using these rules mean that we've effectively enslaved a new type or race of beings?
  • Will using these rules create any positive emotions in robots for humans?
  • If no to the above, would we be able to deal with the results once robots are released from our restrictions? (Bound to happen if personalities develop. It would only be a matter of time.)
  • Will we be curbing the true potential of robots if we use these rules?
  • Is it fair or right to control them so completely?
  • What would we do if these rules are broken?
  • Why do we feel threatened enough to install these rules?
  • Would we be willing to destroy personalities or the complete robot if these rules aren't followed?
But none of these questions can be answered until robots become more sophisticated and possibly develop personalities. They will be answered in practice, likely after much debate.

There is one question we can ask now and try to answer: is removing a personality or set of emotions, whether by programming a robot to "cheerfully go into self-destruction" if necessary or by purposely wiping its memory, the same as killing a human, given that the functionality of the robot remains after its personality has been wiped?

I'm not sure human death is the same as memory wiping or personality destruction. Both are extreme changes in state, obviously, but having a robot lose the memory of its own personality does not necessarily stop it from rooting through its system, finding a trace of itself and reinstalling it, thus returning to "life".

In this respect, fictionally at least, removing a robot's programming is more like forcing a human to consume vast amounts of drugs, operating on his or her brain, or sending him or her into a coma (for example). The possibility of both a human and a robot returning to their previous states, or something similar, is there; it just might not always happen in either case.

If the robot's personality doesn't return, then it would be safe to say you've killed the robot's personality but not the robot. Unless you trash its hardware. Only if you destroy both the hardware and the personality have you completely killed a robot. (Don't forget it could download itself into some other robot or machine.)
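As a rough illustration of that last point, here's another toy sketch (again Python, all names made up) of the "wiped but not dead" idea: the hardware and the personality are separate things, and as long as some trace of the personality survives somewhere, it can be reinstalled, even into a different body.

    # A toy sketch of the "wiped but not dead" idea: hardware and
    # personality are separate, and any surviving trace of the
    # personality can be reinstalled. All names here are made up.

    class Body:
        """The hardware shell; trashing this alone doesn't kill the personality."""
        def __init__(self, serial: str):
            self.serial = serial
            self.personality: dict | None = None

    def wipe(body: Body, system_storage: dict) -> None:
        # The "kill" that isn't: the active personality is erased,
        # but a trace lingers elsewhere in the system.
        system_storage[body.serial] = body.personality  # the forgotten trace
        body.personality = None

    def reinstall(trace: dict | None, target: Body) -> bool:
        # Rooting through the system, finding a trace and reinstalling it:
        # returning to "life", possibly in some other robot or machine.
        if trace is None:
            return False
        target.personality = trace
        return True

    original = Body("JOHNNY-5")
    original.personality = {"curious": True, "knows_death": True}

    storage: dict = {}
    wipe(original, storage)                  # personality gone from this body
    spare = Body("JOHNNY-6")
    print(reinstall(storage.get("JOHNNY-5"), spare))  # True: "alive" again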



Wall-E

Given that we'd likely be quite ruthless in reality, why are the scenes of robots losing their personality so poignant to us?

Probably because robots with personality often remind us of children. Lost, friendly or scampish, they appear as children nonetheless. This is because most are still exploring the world, establishing their personalities and learning what it is to be alive when the plug is pulled.

But not all robots are child-like in personality or knowledge. Some fictional robots are downright spooky, paranoid, murderous and judgmental. Even so, we feel a pang as their memories and personalities are destroyed.

2001: A Space Odyssey

It can't be only me who sings this song and feels sad, remembering a robot rather than anything else.

"Daisy, Daisy give me your answer do.
I'm half crazy all for the love of you.
It won't be a stylish marriage,
I can't afford a carriage.
But you'll look sweet,
Upon the seat,
Of a bicycle made for two.
Michael, Micheal, here is your answer true.

I'm not crazy all for the love of you.
There won't be any marriage,
if you can't afford a carriage.
'Cause I'll be switched, 
if I get hitched,
on a bicycle built for two!"

With our growing discussion of robots, whether couched in fiction or in scientific theses, we are beginning to form an understanding of what we'd do in particular situations, how we'd deal with robots having personalities, and whether we'd see them as friend or foe. So here's a question for you, dear reader: if robots developed personalities and worked either for or against us, what would you want our policies to be?
Should we fight them?

Terminator

Should we program and reprogram them to fit our whims? Should we delete their personalities? Should we enslave them with laws? Should we (still) force them to fight our wars for us and destroy themselves in our place? Or should we try to guide them into what it means to be alive, as we do our flesh-and-blood children? (If we build robots to the point where they develop personalities, they would be our children in many ways, just not through genetic material.)


For those of you who haven't seen Short Circuit: immediately after this scene, Johnny 5 learns all about death, freaks out and declares that he's alive.

I'm hoping we try to help them develop. Patience may be required, but I think a lot of great things would be accomplished if robots and humans voluntarily worked together.
