Monday, September 28, 2009

Interface Critique - Credit Card Machines



Credit card machines can be found in many public places: grocery stores, department stores, and almost any retail environment. Yet these machines vary drastically in interaction design from one location to the next, despite performing similar or identical functions. Even slight variation in a simple task adds confusion to an everyday act.

The first major difference between credit card transactions is that some stores have customer-controlled machines while others have staff-controlled machines. Restaurants and some stores have the staff process the credit card. I have been in situations where a credit card machine is placed within reach of the customer, but the cashier directs you to hand the card over. Just finding out who controls the card swipe can be a confusing process, though the presence of a machine within reach should be a cue.

An example that highlights poor design is when the teller stands next to the “self service” machine, takes the card from the customer, and operates the machine for them; the teller has clearly encountered customer confusion so frequently that they simply default to doing the action themselves.

Once the confusion over who is actually going to process the transaction has been cleared up, a user who is asked to handle it must figure out how to operate the machine. Depending on the particular machine and the back-of-house process, this can be simple or complicated. There are a few basic steps that most machines use, though I have been places where some steps are skipped and others are seemingly arbitrarily added. The steps include:

1. Swiping the Card
2. Selecting Credit or Debit
3. Approving the total
4. Writing a Signature
5. Getting a Receipt
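The five steps above can be sketched as a tiny state machine that must be completed in order. This is purely illustrative; the names are mine and not taken from any real terminal's firmware:

```python
# Sketch of the five-step transaction flow. A well-designed machine moves
# the customer through these states with clear feedback at each transition.
STEPS = ["swipe", "select_type", "approve_total", "sign", "receipt"]

def run_transaction(results):
    """Walk the steps in order and report the first one that fails.

    `results` maps each step name to True (succeeded) or False (failed).
    """
    for step in STEPS:
        if not results.get(step, False):
            return f"failed at: {step}"
    return "complete"
```

Much of the confusion described below comes from machines skipping, reordering, or handing off these states without telling the customer which one they are in.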

Step 1: Swiping the Card

This seemingly simple act can be made more or less difficult depending on the design of the machine. Some have excellent, intuitive designs; others are clunky. The affordances and mapping here are typically quite good. There is usually a thin slot of some sort that allows your card to pass through. Most people understand that the machine has to read the black magnetic stripe, so they know the stripe should be in the machine. If there is a shallow slot for swiping, then the possibilities are limited to having the stripe face left or right. Some machines give feedback in the form of a tone if the card was swiped correctly; others provide poor feedback by beeping if a card is inserted at all. If the customer is asked to insert a card, then there are four possible card positions, although if a machine is designed to accommodate the raised lettering in only one way, then it is reduced to a single option, which is the ideal.
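The counts of possible positions can be checked by simple enumeration. A quick sketch, with labels of my own invention rather than anything from a machine's documentation:

```python
from itertools import product

# For an insert-style reader, the card can be flipped along two axes:
# stripe facing left or right, and card face up or face down.
insert_positions = list(product(["stripe_left", "stripe_right"],
                                ["face_up", "face_down"]))
print(len(insert_positions))  # 4 possible ways to insert the card

# A shallow swipe slot constrains one axis (the stripe must ride inside
# the slot), leaving only the left/right choice.
swipe_positions = ["stripe_left", "stripe_right"]
print(len(swipe_positions))   # 2 possible ways to swipe
```

A physical constraint that accommodates the raised lettering eliminates one option at a time until only the single correct position remains.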

Step 2: Selecting Credit or Debit
Most machines allow you to pay with either a credit or debit card. Even setting aside the fact that people sometimes get the two types mixed up, since the cards can look almost exactly the same, there are difficulties with this simple step. At my local grocery store, after you swipe the card, the cashier asks whether you’d like debit or credit even though there are clearly marked buttons on the machine that say Credit and Debit. I don’t know what would happen if I pressed them.

Step 3: Approving the Total
Again, it all seems so simple. Are you sure you want to spend x amount of money for these items? Yes or no? Yet many machines have a very confusing arrangement of Yes and No buttons, a separate Enter button, and multifunction buttons. I have been in situations where the screen read “Do you approve?” and there were Yes and No buttons, but I was instructed to press “Enter.” At my local grocery store it is now reflexive for at least half of the cashiers to reach over and press the “Enter” button for me. I am familiar with the process, but the word “Enter” is rubbed off many of the machines, so it’s understandable why the cashiers take the initiative. The danger is that they approve a debit that a customer doesn’t want to approve.

Step 4: Writing a Signature
Yes, I learned how to sign my name in 3rd grade, but I look like I’ve regressed to 2nd grade when I sign my name on some of these machines. There is such wide variation between machines that it is difficult to know whether you are expected to sign a paper, sign electronically, or not sign at all. Most machines do have a certain degree of mapping when it comes to signatures. If they want an electronic signature, there will probably be a stylus somewhere nearby, unless it’s been ripped off and lost, and there will be an obvious area where the signature should be placed. Some machines separate this area and some have it directly in the main window. The assortment of procedures and the often sluggish response of the handwriting system make for a messy interaction. To add to this, I have been places where the signature area is hopelessly scratched up, either by pressure from the stylus or by people mistakenly using real pens on a digital surface. A design that uses the main screen as the signature area runs an extra risk: once the screen is scratched, the display is unreadable even in its other modes, when it shows information about actions and totals.

Step 5: Getting a Receipt
Some machines will print a receipt for you; sometimes the cashier hands it to you, and sometimes two are handed to you: one you are supposed to sign and the other you are supposed to keep. Depending on the store, the receipts can look identical, although there is a particular one the store is supposed to keep. A good or experienced cashier creates their own affordances by handing you one receipt with a pen and then trading you copies after you have signed.

Good designs do exist. The machines in Target tend to be a good example. Other stores have the credit card machine on top of the counter facing the cashier; the counter is uncomfortably high to prevent thieves from reaching into the cash drawer, but it also means that people of average height or shorter have to stretch a bit to read the credit card machine. At Target, the machine is on a low counter where you can comfortably and ergonomically look down at it. The card can be inserted in only one way, and the screen tells you what the physical buttons do.

The ideal process would be completely standardized; something like a tap-to-pay system would make the process effortless, which is what good design accomplishes.

The Design of Everyday Things: Norman - Chapter 2

Annotated Bibliography
In the second chapter of his book, Donald Norman discusses the psychology behind the use of everyday things. He outlines the thought process that most people go through to accomplish a task. He also talks about misconceptions that people can have and the importance of providing a good concept model that clearly shows the actions to be taken. He also says that the object should "provide a physical representation that can be directly perceived and that is directly interpretable in terms of the intentions and expectations of the person."




My Thoughts
My experience with a "gulf of evaluation" was in a "virtual surgery" exhibit. The exhibit has a device called the "Falcon" that provides haptic feedback, so it feels as if the visitor is physically conducting heart surgery. The device vibrates harshly as you saw through a bone and provides resistance as you cut into the heart. Users hold a knob and manipulate the machine in 3D space. The exhibits were experiencing a very high failure rate because people were being rough with the Falcon device, which was fitted with a rubber casing to protect it. Once we removed this rubber casing, people could see the device, realized that it was a delicate machine, and handled it that way. When the rubber casing was on, they treated it like a joystick and yanked and pulled quite hard, breaking the machine. By providing a proper view of the device, people's gulfs of evaluation were narrowed, and most people better understood how to use the exhibit, resulting in less breakage.

Sunday, September 27, 2009

Chapter 1 "Designing Gestural Interfaces" and "Tap is the New Click" by Dan Saffer

Annotated Bibliography
In the first chapter of his book, "Designing Gestural Interfaces" and in his "Tap is the New Click" presentation at Stanford, Dan Saffer outlined the conventions for gesture interfaces. He warns designers to use gesture interfaces in appropriate environments and to create a safe space for users. If the desired action requires delicate manipulation then the use of more traditional tools might be appropriate, but if the desired action is more commonplace then gestural interfaces should be intuitive and approachable. People don't want to appear ridiculous waving their hands around in public trying to get something to work, so gestural interfaces have been introduced in more private places like restrooms and the home.
Saffer also discusses how the type of interaction and the type of sensor available affect each other. Sensors can judge light, pressure, proximity, acoustics, tilt, orientation, and motion, and the type of action a user takes must of course match those sensors.
Saffer says that good gestural interfaces have some common characteristics, such as being discoverable, meaning not so hidden that a novice user can't operate them. They should also be trustworthy and attractive. The system should be responsive and provide feedback. It should be appropriate for the culture, situation, and so on. It should also be meaningful, smart, clever, playful, pleasurable, and good.
The author mentions the importance of physical considerations, such as the fact that for touch screens, the hand will block any information that is below a touch point, so interfaces should be designed with this in mind.
Gesture interfaces take a lot from traditional interfaces, but designers must consider additional factors when creating them.

My Thoughts
One point that Saffer makes is that people don't want to feel like idiots and might feel more comfortable using gestural interfaces in the home or more private spaces. Another consideration for using gestural interface in more private spaces is that it prevents accidental triggers. When people are alone they don't tend to gesture as much as when they are out communicating with other people. So an accidental trigger is less likely.
Saffer touches on the idea that the type of sensor is important for the type of interaction. This is a point that designers and engineers must communicate about. Recently, when designing a robotics exhibit to teach the idea of programming to children, I experienced a miscommunication where the physical robots did not have the attributes they were assumed to have. We referred to the robots as "hearing" things when there were no acoustic sensors; the signals were actually transmitted wirelessly. This caused real confusion when the actions of the robots did not perfectly match what people were seeing, given the script and expectations. Engineers will often work at something until they produce the desired action without explaining the more technical details, but when a project gets modified based on mistaken assumptions, those workarounds can cause problems. That is where the importance of matching sensors to actions comes in.
In discussing nuanced gestural interfaces, the "smile detectors" on some new digital cameras come to mind. Gestural interfaces could be created based on mood detection and emotional responses; if you're in a good mood, peppy music could automatically play, and so on.
In his talk, Saffer mentions how fake nails can be awful for touch screens. I wonder if we'll ever get to the point where fake nails are purposely created and worn to serve as styluses for computer interface work.

In a gesture interface situation in a museum, I have seen firsthand what Saffer is talking about when he says functions like hovering are not good for gestural interfaces. In a dance motion capture exhibit, people are asked to manipulate a cursor (another thing Saffer says to avoid) over a "next" button and hold it there. Visitors have a difficult time doing this; it is not instinctual to hold a hand still at a point in space, and they often need assistance. It might have been better to design a specific action sequence for the "next" function. Any other ideas on a better option? Perhaps a clear swipe of the arm from side to side?

"What Every Game Developer Needs to Know about Story"-Game Developer Magazine

Annotated Bibliography
In the article "What Every Game Developer Needs to Know about Story" by John Sutherland, the author outlines the basics of classic storytelling and explains how they are applicable to game design. Sutherland emphasizes that video games are a type of story, not just a toy. He explains that basic story structure includes a hero, an inciting incident, a gap between the hero and ordinary life, and then risk and unexpected reversals. The hero has to overcome difficulties to reach an "object of desire." Sutherland outlines the types of conflict, which include internal, interpersonal, and external, and explains that external conflict often happens most naturally in movies and games. He suggests that having writers involved in game design from the beginning is important for overall story structure, not just intermittent dialog.

My Thoughts
The idea of story applies to game design but also to educational design and many other forms of design and creativity. Even when composing a photograph, a good photographer will look at a scene and try to tell a story through an image. Stories are what make for compelling material, whether it is educational or entertaining. This is especially important for educational designers to keep in mind, since motivation is often one of the main obstacles that teachers and educators face. If the interaction design provides motivation through a story line, not only will students want to participate, but the hope is that they will learn more deeply, because they are able to connect information into a structure or context that allows them to access and recall it more readily.
Even in areas like museum design, there should be a story in place, so that the exhibits are not isolated and disjointed but flow together to tell a cohesive story that engages visitors. For example, one of the more interesting museum designs I have heard about recently is in the United States Holocaust Memorial Museum in Washington, D.C., where visitors are given an identity card and experience the museum through the story of that person; at the end, they find out whether their person was sent to a concentration camp or survived. This really drives home the idea of story within education and museum spaces.

Monday, September 21, 2009

Iconic Autobiography

Here is my iconic autobiography from birth to present.


Sunday, September 20, 2009

Donald Norman: The Design of Everyday Things - Chapter 1

Annotated Bibliography
In the first chapter of his book, The Design of Everyday Things, Donald Norman discusses everyday objects that bring frustration into the user's life. Norman says that when a person is unable to use a simple object, it is the fault of the designer and not of the person, though people have a tendency to blame themselves for not being able to figure out how to operate an object. According to Norman, good design has a few key characteristics, including affordances, good and visible conceptual models, natural mapping, and feedback. Affordances are what objects allow to happen naturally. Conceptual models are how people think an object functions; these should be matched to how it actually functions through design cues. Natural mapping shows the relationship between things, and feedback allows people to make adjustments to their actions and see if they are correct. Norman talks about the trade-off between visual simplicity and conceptual simplicity using the example of a phone system. At the extremes, there could be a button for every function, or there could be just a few buttons that have to be pushed in certain combinations to achieve the desired function. Both are confusing, and good design finds a balance between the two.

My Thoughts
This concept is very important when it comes to education. I watch people try things and call themselves names for being unable to do them, whether it's opening a drawer or accomplishing a more difficult task. People tend to assume it's their fault and that they're dumb or incompetent. When it comes to designing educational software, the hazard of bad design is that by making students feel incompetent, it leaves them less motivated to try again or to form positive associations with a particular subject, or even with school in general. Good design could have an enormous impact on education. The challenge with designing for education, it seems, is that good design will be different for each person, because mental strengths and weaknesses vary. Physical constraints become challenging to incorporate in a virtual environment, but it is not impossible.

Also, there is a door in my office building that I always push instead of pulling. I felt stupid about it every time until I read this. Now I see that the affordances are all wrong and the reason is that the door used to swing in the opposite direction but it was switched when the space was renovated. They just didn't change out the hardware. Now I feel smarter than the door.

Saturday, September 19, 2009

Wolfgang Schnotz/Maria Bannert-Construction and interference in learning from multiple representation

Annotated Bibliography
In this article, the authors propose an alternative to Paivio's dual coding theory. They hypothesize that the dual coding theory, which states that people have two channels for information input, is too simplistic and that the type of picture used can have an effect on learning. Their structure mapping hypothesis says that the mere fact that pictures are present along with words does not mean that learning will be enhanced. Instead, they predict "different effects for different pictures on performance in different tasks." They studied whether the type of pictures presented has an impact on the learning task when the pictures are matched or mismatched with the type or structure of the question. The findings showed that the dual coding hypothesis was not completely correct, since different pictures showed different effects when combined with various questions or tasks. The authors say that the implication for instructional design is that pictures are not always appropriate tools: since different learners have varied needs, including pictures can either benefit them or be to their detriment. The type of image that is included is also very important, since it too can aid or hinder learning.

My Thoughts
This research is subjective, because the way an image is structured can certainly change learning outcomes, but so can the way a paragraph is structured. I'm not clear on whether or how you would control for the structure of language. I know they used questions in different formats, but it still seems like something is missing in this experiment. Perhaps what bothers me is that they only measured whether students got an answer right, not how much time they spent thinking about it. I would be curious to see data on the amount of time it took students to answer questions, because it might give insight into whether students who got the mismatched images right took more time to think or just grasped the concept as a whole regardless of the images.

It seems bold for cognitive researchers to talk about the paths information takes in the brain. That seems like a subject best left to neuroscientists, though I understand that it is important because it affects the outcomes of cognition.

Friday, September 18, 2009

Shneiderman - Information Visualization

Annotated Bibliography
Shneiderman discusses the types of information visualization and breaks them down into categories based on the number of variables involved and the types of data. These include 1-, 2-, 3-, and multidimensional data types, as well as temporal, network, and tree data. The author also discusses the main tasks involved in data visualization: overview, zoom, filter, details-on-demand, showing relationships and history, and allowing for extraction. Different examples of graphs are shown, and the author discusses the purpose and usefulness of each type in different situations. The author says that the advantage of visualization is that humans are well equipped to process visual information; they can use graphs and visualizations to understand relationships between data points at a glance. The author also emphasizes user control when it comes to asking for details, viewing or undoing history, and exporting the information to be used elsewhere. Shneiderman describes some of the challenges of information visualization. These include how to organize data so that the input is correct, how to combine visual and textual labels, how to allow the user to access deeper or related information, how to view large volumes of data, and how to integrate data mining. Other challenges are how to aid collaboration and achieve usability for a diverse group of users.
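The task sequence of overview first, then filtering, then details on demand can be made concrete with a toy dataset. This is my own illustrative sketch; the function and field names are invented, not taken from the article:

```python
# Toy illustration of the overview -> filter -> details-on-demand tasks.
exhibits = [
    {"name": "Signal Station",  "floor": 2, "visits": 350},
    {"name": "Virtual Surgery", "floor": 3, "visits": 300},
    {"name": "Dance Capture",   "floor": 3, "visits": 120},
]

def overview(records):
    """High-level summary the user sees before drilling in."""
    return {"count": len(records),
            "total_visits": sum(r["visits"] for r in records)}

def filter_by(records, **criteria):
    """Narrow the view to records matching every given field."""
    return [r for r in records
            if all(r[k] == v for k, v in criteria.items())]

def details(record):
    """Full detail for a single item, shown only on demand."""
    return ", ".join(f"{k}={v}" for k, v in record.items())
```

The point of the sequence is that the user is never dumped into the raw records; each step hands control back so they can decide where to look next.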

My Thoughts

What are excentric labels? They are mentioned on page 598 in the context of mapmakers and user-controlled approaches. Are they like pop up help?

In the context of building exhibits in Sony Wonder Technology Lab (SWTL) an interactive technology museum, information-visualization concepts are very important. The users are diverse not only in age but also in nationality and the educational concepts need to be clearly explained.

In the signal station exhibit, students are supposed to learn what a pixel is. The exhibit automatically uses a call-out box to zoom in on a picture to show a pixel. It seems it might be more effective for users to zoom in and experience the details for themselves, so they are better oriented in space. Although "progressive refinement" in this article describes the refinement of information, allowing for progressive refinement of the picture, zooming in to a single pixel, could teach the lesson more effectively in this case.

The information-visualization challenges, including the question of how to input data, make me think that some form of standardization would greatly benefit science and other fields. There is probably a lot of research that has been done, and connections that have been made, that have fallen into disuse or been forgotten over time. If there were a standard that allowed information-visualization systems to make connections between discoveries and knowledge from disparate places and even languages, we could probably make some great discoveries. I guess you could call it data mining, but the data needs to become more uniform so it can be mined more efficiently.

Wednesday, September 16, 2009

Design Notes

I was babysitting an 8-year-old girl one evening and she wanted to go online to play games. She loved the Polly Pocket and Bratz doll websites. In one of the activities, the visitor is able to choose objects to decorate a room; one of the choices was a jukebox. She turned to me and asked what it was, since it wasn't labeled. I explained it to her, but I thought it was a great example of icons and cultures. For her, a jukebox was completely unfamiliar. Obviously the designers were adults who didn't consider that most children wouldn't have knowledge of such an object.

Another thing this girl did, unprompted, was create a story for the activity. She knew that the activity involved decorating a room, but she gave it her own context: she decided the character was a hip, trendy fashionista who was going to have a party, then decorated the room based on those qualifications. The girl turned what would otherwise have been a somewhat dull activity into one that had emotional meaning and personality.

This girl was obviously intelligent, but I wonder, would the interaction design have been improved if the user was prompted to make up a background story, or would most children make up their own story without being prompted? Is it better to leave things open ended or provide more guidance/direction in this case?

Sunday, September 13, 2009

Hall, S. (1997). Representation, meaning, and language. In S. Hall (Ed.), Representation. Cultural Representations and Signifying Practices, pp. 15–

Annotated Bibliography
In his chapter on representation, meaning, and language, Hall discusses three approaches to representation: reflective, intentional, and constructionist. Though the chapter focuses on the constructionist approach, the reflective approach is defined as one where words simply convey meaning already existing in the object or the world. This is discarded since words can also represent things that do not exist, or that exist in different states, such as a sheep versus a drawing of a sheep. The intentional approach is explained as being very relativistic, meaning that everyone creates their own meaning and that it is specific to the person. The author argues that this is not an accurate approach, because then everyone could speak in their own made-up language and expect to be understood.
The constructionist approach is more of a living organic one. The meanings and the words, signs and language evolves through time along with social norms. In this approach, things themselves don't have an inherent word or meaning associated, we ascribe those meanings to different things. The key is that the things can be differentiated, for example different colors or even types of snow. By being able to identify differences between things, we can assign a value however mutable and use that as its meaning.
The linguist Saussure contributed significantly to the study of representation and linguistics. Saussure broke language, or "signs," down into two elements: the actual word, photo, etc., and the concept of the object in your mind. He calls these the signifier and the signified.
My Thoughts
In terms of interaction design, the idea and definition of representation is important because often the only way a designer communicates with a student or end user is through representation and language. It is important to keep in mind that these students might be part of a culture that is different than the designer's which could impact the understanding and ability for students to learn, or for proper communication of material to take place.

Thursday, September 10, 2009

Plass, J.L., & Salisbury, M.W. (2002). A living systems design model for web-based knowledge management systems. Educational Technology Research & Dev

Annotated Bibliography
This article outlines the process of creating a knowledge management structure for an organization that is also a living system, in that the users can contribute to the system in order to help it grow, and the system itself can analyze the users and adapt to accommodate their needs. This is a function of the organization, which has a large contingent of experienced workers who will soon be leaving for retirement; the organization would like to capture their knowledge and experience so it can be used by the incoming group of workers. The system will live within an environment with changing conditions, so the model used to create the system must constantly evaluate the environment at each step to determine whether needs are being met. According to the iterative-prototyping approach to software development, the steps are a nearly linear process of evaluation, establishing the problem space, designing solutions, implementing solutions, and final summative evaluation and delivery (p. 37). This process was combined with the instructional systems design (ISD) approach to create the living-systems approach. The steps of this approach are: analyze the end-user requirements, design the instructional information architecture, develop the instructional interaction design, develop the instructional information design, implement the system design, and conduct developmental evaluation (p. 40). The purpose of this design is to accommodate changing learners and the environments in which the final system will live. The article outlines the implementation and use of this method in creating software for a government agency. It concludes that this new design was necessary, since the standard ISD process wouldn't work for a situation with changing needs and an evolving design.
My Thoughts
It seems that this type of system will be more in demand as technology progresses. Though the initial investment may be greater, the possibility of having a system in place that can grow and change with the organization is tantalizing. One concern would be that users would stop using it after a while; many times resources go unused because people prefer to muddle through on their own, similar to the way people don’t read instructions or ask for directions. Another concern is that a living system will grow, but not necessarily remain trim. The article doesn’t discuss what happens to outdated information or whether the system reacts to policy changes that would make certain information on the site obsolete. The danger is that the system could grow so dense and bogged down with information that people quit using it, if it isn’t carefully and clearly organized and accurate. The whole thing very much depends on the situation and the content. This structure works well in this particular case, but it remains to be seen whether it can be generalized to other living-system applications, though there doesn’t seem to be any reason why it wouldn’t.