Sunday, November 14, 2010

Getting Ahead of the Game

I’ll focus on a couple of Nielsen’s readings for this post, starting with “First Rule of Usability? Don’t Listen to Users,” an article from 2001. Nielsen summarizes his article by stating that “to design an easy-to-use interface, pay attention to what users do, not what they say. Self-reported claims are unreliable, as are user speculations about future behavior.” Nielsen goes on to state that gathering user data boils down to the basic rules of usability:
  • Watch what people actually do.
  • Do not believe what people say they do.
  • Definitely don’t believe what people predict they may do in the future.
I tend to agree with Nielsen on these points. Users may put any given device to unpredictable uses, so field observation of how they actually use it is more reliable than what they report.
The next Nielsen reading centered on “Why You Only Need to Test with 5 Users.” It is Nielsen’s contention that five potential users are enough to test any given product, and that spending more on testing is wasted effort. In the article, Nielsen uses a graph to illustrate his point: after five users, each additional tester turns up very little new information and few new usability problems. In some respects I can understand this thinking, but I don’t know whether five people would be a truly representative sample of all potential users.
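The graph comes from a model Nielsen published with Tom Landauer: the proportion of usability problems found with n testers is 1 − (1 − L)^n, where L is the chance that a single user exposes a given problem (about 31% on average across the projects they studied). A few lines of Python make the diminishing returns visible (the 31% is their reported average, so real projects will vary):

```python
# Nielsen-Landauer model: share of usability problems found by n users.
# L is the probability that one user uncovers a given problem; Nielsen
# reports an average of about 0.31 across the projects they studied.
L = 0.31

for n in range(1, 11):
    found = 1 - (1 - L) ** n
    print(f"{n:2d} users -> {found:5.1%} of problems found")
```

By five users the model already sits near 85%, and each extra tester adds only a few points, which is why Nielsen argues for several small tests rather than one big one.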

For this post I’m also going to focus on Chapter 6 of Norman’s Emotional Design, entitled “Emotional Machines.” In this chapter Norman discusses the future of robots and machines, and the need for them to have some level of emotion in order to perform their tasks better. Norman writes, “as robots become more advanced, they will need only the simplest of emotions, starting with such practical ones as visceral-like fear of heights or concern about bumping into things.” More complex emotions, such as anxiety about dangerous situations, pleasure, pride in the quality of their work, and subservience and obedience, will also have to be programmed into their systems. The one part of this chapter I want to focus on is the section on Kismet, the emotional robot built at M.I.T.
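To make that idea concrete before getting to Kismet, here is a toy sketch (entirely my own, not from Norman’s book) of what a “visceral” fear-of-heights signal might look like in a robot controller: fear rises as the robot nears a drop-off, and the controller throttles speed to match.

```python
# Toy sketch of a "visceral" robot emotion in Norman's sense: a fear
# signal that rises as the robot nears a drop-off and throttles speed.
# All values are made up for illustration; this is not a real robot API.

SAFE_DISTANCE_M = 2.0  # beyond this distance, no fear at all

def fear_of_heights(distance_to_edge_m: float) -> float:
    """Fear level in [0, 1]: 0 when far from the edge, 1 right at it."""
    return max(0.0, min(1.0, 1.0 - distance_to_edge_m / SAFE_DISTANCE_M))

def throttled_speed(base_speed_m_s: float, fear: float) -> float:
    """Slow down in proportion to fear; stop entirely at full fear."""
    return base_speed_m_s * (1.0 - fear)

for distance in (3.0, 1.5, 0.5, 0.1):
    fear = fear_of_heights(distance)
    print(f"edge at {distance:.1f} m -> fear {fear:.2f}, "
          f"speed {throttled_speed(1.0, fear):.2f} m/s")
```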


[Image: the Kismet robot]

I’ve seen features on this robot before on Discovery Planet, and it is very interesting work. Kismet uses cues from the underlying emotions of speech to detect the emotional state of the person with whom it is interacting. Kismet has video cameras for eyes and a microphone with which to listen. Additionally, Kismet has a sophisticated structure for interpreting, evaluating, and responding to the world, one that combines perception, emotion, and attention to control behavior. Even though Kismet can react appropriately to someone talking to it, it still lacks any true understanding, and it can get bored of certain interactions and look away. I guess we’re a long way from having a social interaction with a robot that can truly understand our behavior, and that might not be such a bad thing.

I think that robotics and bionics have come a long way since Norman wrote this book, and while I’m not opposed to the development of these fields of study, I think we have to be careful about how they develop. I’m all for the development of bionics to help people recover from various types of disabilities (whether from birth or from accidents), such as the individuals outlined in this National Geographic magazine feature. I’m not opposed to cochlear implants, aids to improving eyesight, or studying biomechanics to help people design products to replace lost limbs.
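Coming back to Kismet for a moment, here is a rough sketch of the kind of perception, emotion, and attention loop described above. It is my own much-simplified illustration in Python, not Kismet’s actual architecture; even the “boredom” behavior is just a toy stand-in.

```python
# A much-simplified, purely illustrative sketch of a Kismet-style loop:
# perception guesses the speaker's emotional state from vocal tone, and
# a boredom level shifts attention when the interaction gets repetitive.
# This is my own toy version, not Kismet's actual architecture.

class SocialRobot:
    def __init__(self):
        self.boredom = 0.0

    def perceive(self, tone: str) -> str:
        """Guess the speaker's emotional state from the tone of voice."""
        return {"soothing": "calm", "harsh": "distressed"}.get(tone, "neutral")

    def respond(self, tone: str) -> str:
        emotion = self.perceive(tone)
        # Flat, neutral interaction raises boredom; anything novel lowers it.
        self.boredom = max(0.0, self.boredom + (0.3 if emotion == "neutral" else -0.2))
        if self.boredom > 0.6:
            self.boredom = 0.0
            return "looks away"  # attention shifts elsewhere
        return f"mirrors a {emotion} expression"

robot = SocialRobot()
for tone in ["soothing", "mumble", "mumble", "mumble", "harsh"]:
    print(f"{tone!r} -> {robot.respond(tone)}")
```

Even this toy version hints at why Kismet can seem lifelike without understanding anything: the behavior is just a few state variables reacting to inputs.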

I am a bit worried about the implications of developing robots that mimic human emotions. What you're looking at below is Actroid-F, Kokoro Co. Ltd. and ATR's latest iteration of the "creepy" humanoid robot, which can mime the operator's facial expressions and head movements with unbelievable (but not quite human) accuracy. Her current job is to act as "an observer in hospitals to gauge patient reactions."

1 comment:

  1. Your comments on Norman's chapter about robots made me think about how students view robots. My grade 3s are currently in the beginning stages of building the Mindstorms NXT robots. Today we made a Venn diagram comparing robots and humans. It was interesting how the grade 3s perceived the robots as being able to 'see' the same way we do. We had a discussion around how the sensors may allow them to 'see' in some respects. Even though they are only 8, they were able to articulate that humans can love and have feelings but robots do not. Or at least not yet.

    I also read the chapter and have been mulling over how I feel about robots being programmed to have human emotions. I question what ethical issues this will raise. I also question the purpose behind giving robots emotions and what purpose/need robots will fill in the future. In grade 3 we are attempting to link our robotics to social studies by programming our robots to perform a task that will help improve quality of life. Will the engineers on the cutting edge of robotics technology be building the 'human factor' into their designs and programs?
