Friday, December 17, 2010

Portfolio Design Rationale: Mission Statement or "Better Late Than Never"

First off, I can't believe I missed this requirement in the course outline. As a result, I'm kind of writing this backwards, after I've already made a bunch of postings. At any rate, here's what the intent of this blog was all along.

Portfolio Design Rationale: Mission Statement
This e-portfolio has three main purposes. First, it will be a place to post information related to my group's project. I've been able to post links to uploaded documents using a website called Scribd. One of the drawbacks of Blogger is that it doesn't allow you to post PDF or Word documents (or PowerPoint presentations, for that matter); you have to use a third-party service to do this. I've been able to post some elements of our design project, but I freely admit they are a bit scattered around the Internet. Some of this is out of my control, since the main prototype of our product is housed on Stephen's webspace on the CBE project server. The second purpose of this blog is to post my thoughts and reflections on the course readings. The third purpose is to try new things as a 21st-century learner and become more comfortable with digital technologies. Over the course of the semester I tried using different Web 2.0 tools, and it's something that I'll continue to do in my future graduate classes.

The target audience for this blog is primarily myself, as a place for recording and reflection. I invite my professor and classmates to view this blog and join me on this journey through the course. This blog is meant to document my goals, challenges and reflections on my learning journey.

My overall goal with this work is to develop a deeper understanding of user-centered design and the course content, enhance my understanding of digital technologies, and grow as a teacher and learner.

Some thoughts on Norman's Epilogue for Emotional Design... 

I want to begin my last response post by saying how much I have enjoyed Norman’s book. It constantly made me think, make connections to other things that I have read or seen, and nod my head with approval and understanding. I am passing it on to my sister; I think she’ll enjoy it because of her background in interior design and architecture.
I would like to emphasize a few concepts that Norman discussed in this chapter, specifically personalization and customization. Norman asks questions like “how can mass-produced objects have personal meaning?” and yet for many people they already do, without the need for any customization. Some people love objects that have been mass-produced, and they form personal attachments to them. Take, for example, Pottery Barn or any similar company. They sell what appear to be unique curios, but really many of your friends and neighbors might have the same apothecary table. Are there truly unique products out there in our mass-produced, mass-consumed culture?
Although there are customization services available to consumers for some products, there really are only a fixed number of choices, styles, colors and materials. I hope that Norman’s idea of “mass customization” becomes more commonplace and extends to everything. Currently, computer manufacturers like Dell employ a “just-in-time” manufacturing model: items are only manufactured after they have been purchased, so there’s no stockpile, which, in turn, reduces the cost of inventory. On a related note, in Thomas Friedman’s book The World is Flat, he discusses Dell’s manufacturing and supply-chain operation in great detail. Here are some interesting facts about Dell’s operation, which I am taking from Friedman’s book (pages 515-519):
  • Dell has six factories around the world: Limerick (Ireland), Xiamen (China), Eldorado do Sul (Brazil), Nashville (Tennessee), Austin (Texas), and Penang (Malaysia)
  • Orders are sent by e-mail to the various factories
  • Parts needed for every individual order are sent to supplier logistics centers (SLCs)
  • Around every Dell factory there are SLCs, owned by the different suppliers of Dell parts
  • On an average day, Dell sells 140,000 to 150,000 computers
  • Those orders come in over the phone or through Dell’s website
  • As soon as the orders are taken, the suppliers at the SLCs know about it
  • Every two hours the Dell factories send an e-mail to the various SLCs telling them which parts are needed and in what quantity
  • Parts are delivered 90 minutes later
  • All parts are unloaded in 30 minutes, and bar codes are entered into a tracking system
  • Dell has multiple suppliers for most of its key components
In his book, Friedman goes through all of the various parts and components and their origins (where they were manufactured) because he wrote the book on a Dell Inspiron notebook and wanted to know all of the global connections that made this piece of technology “tick”.
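Friedman's description reads almost like pseudocode for a pull-based replenishment loop, so here is a minimal sketch of that two-hour cycle in Python, just to see the logic laid out. The cadence comes from the bullet points above; everything else (the function names, the sample part names, the order queue) is my own invention and certainly not Dell's actual system:

# A toy model of the "just-in-time" pull cycle Friedman describes:
# orders accumulate as they are taken, and every two hours the factory
# tells its supplier logistics centers (SLCs) exactly which parts it
# needs; the parts arrive about 90 minutes later and are unloaded and
# scanned into a tracking system within 30 minutes.
from collections import Counter

order_queue = []  # orders taken over the phone or through the website

def take_order(parts):
    """Record a customer order as the list of parts it requires."""
    order_queue.append(parts)

def two_hour_pull():
    """Aggregate the parts needed for pending orders and notify the SLCs."""
    needed = Counter()
    for parts in order_queue:
        needed.update(parts)
    order_queue.clear()
    print("E-mail to SLCs:", dict(needed))

take_order(["motherboard", "15in screen", "80GB drive"])
take_order(["motherboard", "17in screen", "160GB drive"])
two_hour_pull()  # nothing is built, and no inventory is held, until ordered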
Norman also discusses how we, as individuals, are designers in our everyday lives, because we manipulate the environment in which we live to suit our needs: we select items to own, we build, arrange and restructure. Through our designs, we transform houses into homes, spaces into places, and things into belongings. In my everyday life I maintain a personal website and weblog for myself and one for my son (my wife has one too, although she doesn’t update it very often). These are, according to Norman, personal, non-professional design statements. But how individualized can these blogs be when they come with pre-packaged templates that everyone uses? Where is the personal customization? This is something that I wish blogging services like Blogger would offer: far greater customization and personalization of blogs. This blog does have some elements of customization, and a user with greater knowledge of CSS could really make the template their own. It is fairly user-friendly, though, with drag-and-drop features and elements that you can choose to include or not include on your blog.

Monday, November 22, 2010

The Future of Robotics

There seem to be a few different forms that robotics could take. Currently there are humanoid robots (like ASIMO), modular robots, educational toy robots, and sports-related robots (hopefully culminating in real-life Cyberball at some point). The path that most robotics research seems to take (or maybe this is just the type of robot that gets the most attention in the media) is the humanoid robot: a robot meant to mimic or interpret human facial expressions, with the goal of one day being capable of real human-like emotions. Robots are being built to imitate human expressions, to think, and to respond to stimuli. At some point, we will develop robots that will be able not only to see, hear, touch, and smell, but also to feel a range of emotions.


Many robotics engineers have been influenced in their programming by the writings of science fiction author Isaac Asimov. Asimov is famous for creating a set of fictional laws for the governance of robotic emotions and behavior patterns, laws that would have to be developed some time in the future. Initially, Asimov had three laws, which can be summarized as follows:
 

First Law:
A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

Second Law:
A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.

Third Law:
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In his science fiction writings Asimov wrote many short stories that played upon the "loopholes" in his original three laws of robotics. Even as early as 1950, Asimov was making changes to the laws, and by 1985 he had revised them once again. The revised set of laws is listed below:

Asimov's Revised Laws of Robotics (1985)
Zeroth Law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate the Zeroth Law of Robotics.
Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the Zeroth or First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the Zeroth, First, or Second Law.
These laws were later modified further into an extended set.

An Extended Set of the Laws of Robotics
The Meta-Law: A robot may not act unless its actions are subject to the Laws of Robotics.
Law Zero: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
Law One: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher-order Law.
Law Two:
  • A robot must obey orders given it by human beings, except where such orders would conflict with a higher-order Law.
  • A robot must obey orders given it by superordinate robots, except where such orders would conflict with a higher-order Law.
Law Three:
  • A robot must protect the existence of a superordinate robot as long as such protection does not conflict with a higher-order Law.
  • A robot must protect its own existence as long as such protection does not conflict with a higher-order Law.
Law Four: A robot must perform the duties for which it has been programmed, except where that would conflict with a higher-order Law.
The Procreation Law: A robot may not take any part in the design or manufacture of a robot unless the new robot's actions are subject to the Laws of Robotics.
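What strikes me about the extended set is that it is essentially a precedence ordering, something you could write down as data. Here is a minimal sketch of my own in Python (the encoding, the function and the example are invented purely for illustration; nothing here comes from Asimov or from real robotics practice):

# A toy encoding of the extended Laws of Robotics as an ordered list.
# Lower index = higher precedence; an action is judged by the
# highest-order law it violates, if any.
LAWS = [
    "The Meta-Law",
    "Law Zero",
    "Law One",
    "Law Two",
    "Law Three",
    "Law Four",
    "The Procreation Law",
]

def first_violation(violates):
    """Return the highest-order law an action violates, or None.

    `violates` maps law names to True/False for a proposed action.
    """
    for law in LAWS:
        if violates.get(law, False):
            return law
    return None

# Example: an order from a human (Law Two) that would harm humanity is
# refused, because the violation of Law Zero outranks obedience.
proposed_action = {"Law Zero": True, "Law Two": False}
print(first_violation(proposed_action))  # -> "Law Zero"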

So what's the point of all of these "laws"? They serve as a guide for robotics engineers when they are programming their robots, a reminder to think of humanity as a whole. Science fiction definitely has its share of pessimism about robots and their capacity to be beneficial to mankind. Many storylines deal with the "flaws of humanity": the machines end up flawed because their creators have passed their own flaws on to their creations. We also have to think of the social implications of robots that have emotions. Will these future robots need or demand the same rights that humans enjoy? Again, science fiction has dealt with these issues, and when we were discussing this in class last week I remembered an episode of Star Trek: The Next Generation that dealt with this very issue. As you may or may not know, in this Star Trek series there was a character named Data, a sentient android created by a scientist named Dr. Noonien Soong. In the episode "The Measure of a Man" a court proceeding is held to determine Data's legal rights, because a scientist wanted to disassemble Data to learn how he worked and attempt to duplicate Dr. Soong's work on the "positronic brain" (you've got to love Trek-jargon). The arguments in the episode center on whether Data is Starfleet property or whether he should enjoy rights as an autonomous individual. Although it was more of a "courtroom drama" and an episode that was probably really cheap to make (since there were no special effects per se), it is one of my favorites from the series because of the ideas and issues it raises. I'm posting the entire episode below from YouTube.
In Part 1, you can skip to about the 5 minute mark, and not miss too much.



Here is Part 2 of the episode:


Part 3:


Part 4:


Part 5:

Sunday, November 14, 2010

Getting Ahead of the Game

I’ll focus on a couple of Nielsen’s readings for this post, beginning with “First Rule of Usability? Don’t Listen to Users”, an article from 2001. Nielsen summarizes his article by stating “to design an easy-to-use interface, pay attention to what users do, not what they say. Self-reported claims are unreliable, as are user speculations about future behavior”. He goes on to state that the way to get user data boils down to the basic rules of usability:
  • Watch what people actually do.
  • Do not believe what people say they do.
  • Definitely don’t believe what people predict they may do in the future.
I tend to agree with Nielsen on these points. There may be unpredictable uses for any given device, and it is better to do field observation to see how people actually use it.
The next Nielsen reading was “Why You Only Need To Test With 5 Users”. It is Nielsen’s contention that you only need five potential users to test any given product, and that spending more on product testing is a waste of resources. In the article, Nielsen uses a graph to illustrate his point: after five users you’re not really getting any new quality information or finding many new usability problems. In some respects, I can understand this thinking, but I don’t know whether five users would be a truly representative sample of all potential users.
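The curve in Nielsen's graph comes from a simple problem-discovery model (from Nielsen and Landauer): the proportion of usability problems found with n test users is 1 - (1 - L)^n, where L is the share of problems a single user uncovers, and Nielsen uses L = 31%. Here is a minimal sketch of that curve in Python, just to see where the "five users" figure comes from:

# Nielsen & Landauer's problem-discovery model: with n test users,
# the share of usability problems found is 1 - (1 - L)**n, where L
# is the proportion a single user uncovers (Nielsen uses L = 0.31).
L = 0.31
for n in range(1, 11):
    found = 1 - (1 - L) ** n
    print(f"{n:2d} users: {found:.0%} of problems found")
# Around n = 5 the curve passes roughly 85%, which is the basis for
# the "five users is enough" claim; the gains after that are small.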

For this post I’m also going to focus on Chapter 6 in Norman’s Emotional Design, entitled “Emotional Machines”. In this chapter Norman discusses the future of robots and machines, and the need for them to have some level of emotion in order to perform their tasks better. Norman writes, “as robots become more advanced, they will need only the simplest of emotions, starting with such practical ones as visceral-like fear of heights or concern about bumping into things.” More complex emotions, such as anxiety about dangerous situations, pleasure, pride in the quality of their work, and subservience and obedience, will also have to be programmed into their systems. The one part of this chapter that I wanted to focus on was the section on Kismet, the emotional robot built at M.I.T.



I’ve seen features on this robot before on Discovery Planet, and it is very interesting work. Kismet uses cues from the underlying emotion in speech to detect the emotional state of the person with whom it is interacting. Kismet has video cameras for eyes and a microphone with which to listen. Additionally, Kismet has a sophisticated structure for interpreting, evaluating, and responding to the world that combines perception, emotion, and attention to control behavior. Despite the fact that Kismet can react appropriately to someone talking to it, it still lacks any true understanding, and it can get bored of certain interactions and look away. I guess we’re a long way from having a social interaction with a robot that can truly understand our behavior, and that might not be such a bad thing. I think that robotics and bionics have come a long way since Norman wrote this book, and while I'm not opposed to the development of these fields of study, I think that we have to be careful with them. I'm all for the development of bionics to help people recover from various types of disabilities (whether from birth or from accidents), such as the individuals outlined in this National Geographic magazine feature. I'm not opposed to cochlear implants, aids to improving eyesight, or studying biomechanics to help design products to replace lost limbs.
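To make the perception-emotion-attention idea concrete for myself, here is a toy sketch in Python of that kind of control loop. This is my own drastic simplification, not Kismet's actual architecture; the speech cues, emotional states and behaviors are invented for illustration:

# A toy perception -> emotion -> attention -> behavior loop, loosely
# inspired by the description of Kismet above.

def perceive(speech):
    """Crudely classify the affect of an utterance from simple cues."""
    if speech.endswith("!"):
        return "excited"
    if speech.endswith("?"):
        return "curious"
    return "neutral"

def update_emotion(emotion, percept, boredom):
    """Shift the emotional state toward the speaker's affect, but drift
    toward boredom when interactions stay neutral for too long."""
    if percept == "neutral":
        boredom += 1
    else:
        boredom = 0
        emotion = percept
    if boredom > 3:
        emotion = "bored"
    return emotion, boredom

def choose_behavior(emotion):
    """Attention follows emotion: a bored robot looks away."""
    return "look away" if emotion == "bored" else "maintain eye contact"

emotion, boredom = "neutral", 0
for utterance in ["Hello!", "Nice day.", "Okay.", "Fine.", "Right."]:
    emotion, boredom = update_emotion(emotion, perceive(utterance), boredom)
    print(utterance, "->", emotion, "->", choose_behavior(emotion))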

I am a bit worried about the implications of developing robots that mimic human emotions. What you're looking at below is Actroid-F, Kokoro Co. Ltd. and ATR's latest iteration of the "creepy" humanoid robot that can mime the operator's facial expressions and head movements with unbelievable (but not quite human) accuracy. Her current job is to act as "an observer in hospitals to gauge patient reactions."

Catching Up Part 3

I've found it very hard to keep reading Vicente's The Human Factor. I find his central thesis a bit repetitive: essentially, if you don't account for the human factor in your design, whatever can go wrong will go wrong (with apologies to Murphy's Law). My posts on Vicente's work are a requirement for this course, and since I'm not being paid to do a book review (nor did anyone ask for my thoughts on the book), I'll continue on with my obligatory reading responses. I'm going to try to kill three birds with one stone here and write responses to Chapters 5-7 today.


Chapter 5 is entitled "Minding the Mind II: Safety-Critical Psychology", in which Vicente explores whether the human-tech design principles used to create more user-friendly everyday technologies can also be used to design "green" products and systems that are more environmentally friendly. At first, I got excited because I thought that he was going to discuss something along the lines of planned obsolescence, similar to what Annie Leonard has been looking at with her "Story of Stuff Project" and her more recent movie called "The Story of Electronics" (see embedded video below).





I was a bit disappointed when he started talking about student projects (student projects from 1994, mind you). He discussed the amount of energy being used by PCs, and the fact that people forget to turn off their computers at night, so some of his students designed "his favorite" project, the Power Pig, which was an on-screen reminder to workers to power down their PCs at the end of the work day.
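Out of curiosity, here is roughly what a "Power Pig"-style reminder could look like today: a minimal sketch of my own in Python, assuming a 5 p.m. end to the work day, and in no way the students' actual 1994 implementation:

# A toy end-of-day reminder in the spirit of the Power Pig: once the
# work day is over, nag the user to power down the PC.
import datetime
import time

SHUTDOWN_HOUR = 17  # assumed 5 p.m. end of the work day

def remind_to_power_down(check_every_seconds=600):
    """Check the clock every ten minutes and remind the user once."""
    while True:
        if datetime.datetime.now().hour >= SHUTDOWN_HOUR:
            print("The work day is over - please power down your PC.")
            break
        time.sleep(check_every_seconds)

remind_to_power_down()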


Vicente also discussed nuclear power plants once again, this time focusing on the Three Mile Island disaster. There were several flaws in the design process (which Vicente outlined in detail), and because of some of these major design flaws, Three Mile Island was "an accident waiting to happen". He bombards the reader with statistics such as how much it cost to build Three Mile Island ($700 million), how many months it was fully operational (4 months), and that it cost $973 million to clean up all of the contamination. It's too bad that nuclear power gets so maligned in this book, so I thought that I would look up some statistics of my own. No one was killed during the Three Mile Island accident. People did die at Chernobyl, and many people got sick, but the poor design and safety violations there were so egregious and numerous that the International Nuclear Safety Advisory Group published a 148-page report in 1993 detailing every possible thing that went wrong and how it could have been easily fixed. That doesn't change the fact that everyone around the accident got massively screwed, of course, but it seems that our initial estimates of the long-term damage of a nuclear event may have been exaggerated. Apparently we should be afraid of every other kind of energy production, though. For example, coal kills more miners every few years than the initial blast at Chernobyl did. This, of course, doesn't take into account air pollution from coal, which dwarfs those numbers yearly. But come on, that's not really surprising, is it? We know coal is bad for us -- that's why we're developing all these great green forms of energy. They're renewable and better for the environment.
Unfortunately, they're actually not necessarily safer than nuclear energy for those involved in producing them. A study found that in Europe alone, wind energy has killed more people than nuclear energy and, worldwide, hydroelectric energy has, too.
The leading cause of accidents involving wind energy farms is "blade failure," which is when a turbine blade breaks, sending shrapnel flying through the air. I guess I'm just getting tired of Vicente's fear-mongering in this book about nuclear power.


Chapter 6 is entitled "Staying on the Same Page: Choreographing Team Coordination", and it focuses on one aspect of "soft" technology (as Vicente defines technology): "designers must create a system that is tailored to the characteristics and needs of the team as a distinct entity in its own right. If they don't, the system won't run effectively and accidents will occur." (Vicente, p. 156) He looks at examples from the aviation industry, such as a crew that crashed a plane because everyone in the cockpit became so focused on a burnt-out indicator light that they lost sight of their primary purpose: to fly the plane. He details the CRM (Cockpit Resource Management) training that is now standard in the aviation industry. The aviation industry seems to be trying to learn from its mistakes, not only in the design of cockpits but also by learning from "near mistakes" (through the ASRS, the Aviation Safety Reporting System outlined in Chapter 7) and from interactions with the cabin crew. He also looks at the perceived infallibility of doctors in Chapter 7. In Alberta it's becoming easier to report medical errors, and the Health Quality Council of Alberta has issued a foundational document called the Patient Safety Framework for Albertans, which was developed to guide, direct and support continuous and measurable improvement of patient safety in the province. Hopefully, these new reporting procedures will improve patient safety in the province the way the ASRS has improved the aviation industry.




Saturday, November 6, 2010

Catching Up Part 2

I think one of the more interesting recent readings has been the Krathwohl article on the revision of Bloom's Taxonomy. The original Bloom's Taxonomy was a topic that was taught to us as beginning teachers as a lead-in to lesson planning and the writing of learning objectives. I had heard in previous Master's classes that the taxonomy had undergone some revisions to suit the 21st century classroom, specifically to make references to Web 2.0 tools. I thought that I would do some investigating on my own and look at the differences between the original taxonomy and the revised taxonomy. One of the first things that jumps out at me when comparing the two is the change in language, and even the rethinking of the hierarchy itself and of what is considered higher-order thinking today. In the original Bloom's Taxonomy the hierarchy of thinking was ranked from Knowledge, Comprehension, Application, Analysis, and Synthesis up to Evaluation. With the exception of Application, all of the categories were further subdivided. In the revised taxonomy we see a shift in the way these categories are ranked and how they are written: there has been a shift from nouns to verbs, and the new hierarchy is Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating. What is particularly exciting for me is discovering all of these new resources that link the revised Bloom's Taxonomy to technology, specifically Web 2.0 tools.



I've been drawn to the work of Andrew Churches and his wiki educational-origami. Churches is providing teachers with practical links between the revised Bloom's Taxonomy and Web 2.0 sources. I would highly recommend his article embedded below on Bloom's Digital Taxonomy.

Wednesday, November 3, 2010

Catching Up Part 1

It's pretty apparent that I have fallen far behind in my postings. I can chalk it up to the final push of marking right before Midterm Report Cards were due. My marks have been finalized and submitted, and Report Cards go home on Friday. I've been very good at sticking to a one-week turnaround for my marking this semester, but I've been negligent with my readings and postings here on the blog. Hopefully over the next few days I'll be able to push through some readings and post some responses.

I like how the course readings contain all these hyperlinks to other readings and videos that we can explore. I started out looking at the NetGen Skeptic and then ended up watching a video from Oxford University about "resident" versus "visitor" categories of Internet users. This builds upon Prensky's ideas about "digital natives" and "digital immigrants". I don't know if I agree with Prensky's thesis. I find it very black-and-white in the way it sorts people into two broad categories: those who have grown up with ubiquitous digital technology all around them, and those who have had to learn digital tools later and who, even if they master them, will always have some sort of "accent" online. Sure, kids today have the world at their fingertips; they think nothing of the amount of information they can access with a few clicks of the mouse, and there are adults who definitely don't know as much about technology as kids do. But I find that young people are very knowledgeable about the technology they use repeatedly, while in other areas they may be quite ignorant. For example, with certain Web 2.0 tools and social networking sites students are able to pick things up quickly and run with them. I'm sure the students in my classroom know how to use Facebook really well and can upload videos to YouTube like nobody's business, but when it comes to other Web 2.0 productivity tools, their knowledge may be scant.

I also think that there is a difference between how the current generation of students that I'm teaching views privacy and how I view privacy. They have "digital footprints" all over the web, and may be too open with personal information online. I think it is exciting to teach students today; I just wish that I could make fuller use of the Web 2.0 tools at my disposal, because some of them are so easy to use that, with minimal instruction, students can figure them out rather quickly. I guess I just have a problem with being labelled a "digital immigrant" or a "visitor"; either way it feels like I'm on the outside looking in.

I also had a quick peek at the book review for Born Digital: Understanding the First Generation of Digital Natives. One quote that stood out for me was when John Palfrey said, "A key advantage of using technology in education is that, through its use, we can give young people the digital media learning skills that they need. Right now, we are not teaching young people to sort credible information from less credible information online, despite the proliferation of sources and the extent to which we know young people are relying on such sources. Technology can also be very engaging and interactive and -- truly -- fun for young people to use as they learn." In this part of the interview Palfrey is talking about technology not being a panacea, and about not using technology simply for the sake of using technology. He's also talking about making sure that educators teach students to sort through the massive amount of information they have access to and to find credible information. I find that students are tempted by the easy access to information and often don't understand the difference between common knowledge and what would be considered plagiarism. A lot of students think that copying and pasting something from the Internet and "putting it in their own words" (which usually consists of extensive use of a thesaurus) will suffice. Despite the fact that I go over proper research techniques prior to any research project and talk about how to avoid plagiarism, it still happens a lot. Students will try it despite my warnings. I warn them that my first degree was a Bachelor of Arts in History, that I am very good at doing research, and that I'm probably better at it than they are. If they can find something easily online, so can I. Almost every semester I catch a student plagiarizing, and they receive a zero for the assignment. At least it gives me an excuse to play a scene from "Good Will Hunting" so we can talk about plagiarism and how being unoriginal is one of the worst things to be.

Sunday, October 24, 2010

October 24

Here is a copy of my group's flow chart for our information design. Click on the concept map image to view it in a larger size.


I'm also going to embed the Prezi here on the blog.



Here is a copy of our group's written description.