Archive for the ‘shopping cart abandonment’ Category

Usability vs. UX: analysis of case studies

April 12, 2011
1. Sullivan, Patricia. "Beyond a Narrow Conception of Usability Testing." IEEE Transactions on Professional Communication 32, no. 4 (December 1989): 256-264.
Results: Suggests new frameworks for viewing usability study methods and for interpreting the validity of their results. Postulates that "a growing number of psychologists, engineers, and technical communicators want to make the user more integral to the whole development process."
Tools: An analysis of others' methods.
Notes: The plain language movement probably had some influence on her too, although it is not cited.
2. Hassenzahl, M., and Tractinsky, N. "User Experience – a Research Agenda." Behaviour and Information Technology 25, no. 2 (March-April 2006): 91-97.
Results: Suggests a new theory of UX in which designers exert control to ensure that a positive experience becomes certain. UX is about contributing to our quality of life by designing for pleasure rather than for the absence of pain.
Tools: Conducted a literature review of proposals received.
Notes: One can sense the rhetorician at work, crafting the pleasing experience and downplaying any lack of quality; the iPhone antenna problems, for example.
3. Nielsen, Jakob. "Writing for the Web." http://www.useit.com/papers/webwriting/
Results: Suggests many best practices to follow, suggests further study of papers and books, and then finally recommends that one enroll in his courses.
Tools: Years of usability studies and analysis drawn from that body of work.
Notes: A website's rhetoric will be less effective if users find it difficult to read. Notable in that, in a brief space, it makes many salient points about how people read and how writers should take this into account when creating online communications.
Title of Study
Results Notes
4. Obrist M., Roto V., and Väänänen-Vainio-Mattila K. “User experience evaluation: do you know which method
to use?” CHI 2009, April 4 – 9, 2009, Boston, Massachusetts, USA. Extended Abstracts 2009: 2763-2766.
Unknown – this was an abstract. However the questions were particularly illuminating. Contributions from conference attendees on current known methods. Creation of a Special Interest Group (SIG) that will identify and gather
people interested in UX evaluation in different application
areas and contexts. results.
Can we ever really know how the user feels? Do they even know? Or can we only influence positive feelings and minimize negative ones?
5. Bevan, Nigel. "What Is the Difference Between the Purpose of Usability and User Experience Evaluation Methods?" Internet paper, http://www.nigelbevan.com/
Results: Bevan notes a weakness in the methods: no metrics or requirements. He states that "user experience seems to . . . focus on evaluation [which] has preceded a concern with establishing criteria for what would be acceptable results of evaluation." That comment was useful, as I, too, wondered where the UX standards were.
Tools: Rigorous analysis of the UX methods and creation of a categorization of the usability measures reported. He then compares and contrasts each method as to how it measures UX or usability.
Notes: Usable as a roadmap of what one is measuring and how to measure it better.
6. Rodden, K., et al. "Measuring the User Experience on a Large Scale: User-Centered Metrics for Web Applications." Proceedings of CHI 2010. http://www.rodden.org/kerry/heart
Results: Creation of a UX framework, HEART (Happiness, Engagement, Adoption, Retention, Task success). This was used to measure user satisfaction during a major redesign of iGoogle. The team reported an initial decline in its user satisfaction metric (measured on a 7-point bipolar scale). However, the metric recovered over time, indicating that change aversion was probably the cause and that once users got used to the new design, they liked it. With this information, the team was able to make a more confident decision to keep the new design.
Tools:
Happiness: measured via a weekly survey on a 7-point bipolar scale.
Engagement: % of active users who visited five or more days in the last week.
Adoption: how many new users (i.e., the number of accounts created in a week).
Retention: how many users are still present (i.e., % of 7-day active users in a given week who are still active three months later).
Task success: efficiency (e.g., time to complete a task), effectiveness (e.g., % of tasks completed), and error rates.
Notes: It makes sense to add a scale to UX measurements. Couldn't it go to 11? Is it wrong to apply usability metrics to UX?
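The Engagement, Adoption, and Retention definitions above are concrete enough to compute from an ordinary visit log. Here is a minimal Python sketch of those calculations; the function names and the toy data are my own inventions for illustration, not anything from the paper.

```python
from datetime import date, timedelta

# Hypothetical event log: (user_id, visit_date). Toy data for illustration only.
events = [
    ("alice", date(2011, 4, 4)), ("alice", date(2011, 4, 5)),
    ("alice", date(2011, 4, 6)), ("alice", date(2011, 4, 7)),
    ("alice", date(2011, 4, 8)),
    ("bob", date(2011, 4, 4)), ("bob", date(2011, 4, 6)),
]
signups = {"alice": date(2011, 1, 3), "bob": date(2011, 4, 4)}

def active_users(events, start, days=7):
    # Users with at least one visit in the window [start, start + days)
    end = start + timedelta(days=days)
    return {u for u, d in events if start <= d < end}

def engagement(events, start):
    # HEART Engagement: % of the week's active users who visited on 5+ distinct days
    days_seen = {}
    for u, d in events:
        if start <= d < start + timedelta(days=7):
            days_seen.setdefault(u, set()).add(d)
    if not days_seen:
        return 0.0
    return 100.0 * sum(len(ds) >= 5 for ds in days_seen.values()) / len(days_seen)

def adoption(signups, start):
    # HEART Adoption: number of accounts created during the week
    return sum(start <= d < start + timedelta(days=7) for d in signups.values())

def retention(events, start, lag_days=90):
    # HEART Retention: % of this week's active users still active ~3 months later
    cohort = active_users(events, start)
    later = active_users(events, start + timedelta(days=lag_days))
    return 100.0 * len(cohort & later) / len(cohort) if cohort else 0.0

week = date(2011, 4, 4)
print(engagement(events, week))  # 50.0: alice hit 5 distinct days, bob did not
print(adoption(signups, week))   # 1: only bob signed up during this week
print(retention(events, week))   # 0.0: the toy log has no visits 90 days later
```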
7. Large organizations need to track and compare their online sales, customers, and trends such as shopping cart abandonment. http://blog.goecart.com/index.php/proven-website-conversion-tips/
Results: Creation of an overall framework that measures several factors to better identify causality. PULSE metrics: Page views, Uptime, Latency, Seven-day active users (i.e., the number of unique users who used the product at least once in the last week), and Earnings.
Notes: Most of this data is proprietary and unavailable. Large ecommerce firms (Amazon, eBay, Facebook) do have in-house models and ongoing studies, but this data is neither shared nor publicly available.
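PULSE is similarly mechanical to compute once you have request logs. Below is a small hypothetical sketch of the page views, uptime, latency, and seven-day-active-users pieces; the log format and sample records are assumptions of mine, not anything the linked post publishes.

```python
from statistics import median

# Hypothetical request log: (user_id, day_index, latency_ms, http_status)
log = [
    ("u1", 1, 120, 200), ("u1", 3, 95, 200), ("u2", 2, 340, 200),
    ("u2", 9, 150, 500), ("u3", 8, 80, 200),
]

def pulse_snapshot(log, today, window=7):
    # Keep only requests from the trailing `window` days
    recent = [r for r in log if today - window < r[1] <= today]
    return {
        "page_views": len(recent),
        "uptime_pct": 100.0 * sum(r[3] < 500 for r in recent) / len(recent),
        "median_latency_ms": median(r[2] for r in recent),
        "seven_day_actives": len({r[0] for r in recent}),
    }

# Earnings would come from billing data, which a request log does not contain.
print(pulse_snapshot(log, today=9))
```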
8. How can Blackboard, Inc. better capture feedback and improve the UX of its web pages and software products? Presented at UX BarCamp DC in January 2011. http://www.slideshare.net/bbuiax/design-for-the-rudes
Results: Blackboard created a framework for capturing user feedback, RUDES: Reliable, Useful, Delightful, Engaging, Simple. Users rate each experience on the RUDES dimensions and are asked whether each component exceeds, meets, or misses expectations. Beyond that, unknown; it appears to be a work in progress.
Notes: Blackboard staff stated that scaling factors were necessary to make better design decisions. They did not disclose how this data would be collected, analyzed, or used. Worth noting that the desired answer is positioned first; how good is a survey if one tries to influence it so strongly?
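Since Blackboard did not disclose its analysis, the following is only a guess at the simplest possible tally: a hypothetical Python sketch counting exceeds/meets/misses responses per RUDES dimension, with made-up survey data.

```python
from collections import Counter

DIMENSIONS = ("Reliable", "Useful", "Delightful", "Engaging", "Simple")

# Hypothetical responses: each respondent rates every dimension as
# "exceeds", "meets", or "misses" expectations.
responses = [
    {"Reliable": "meets", "Useful": "exceeds", "Delightful": "misses",
     "Engaging": "meets", "Simple": "meets"},
    {"Reliable": "exceeds", "Useful": "meets", "Delightful": "meets",
     "Engaging": "misses", "Simple": "exceeds"},
]

def tally(responses):
    # Count exceeds/meets/misses for each RUDES dimension
    return {dim: Counter(r[dim] for r in responses) for dim in DIMENSIONS}

for dim, counts in tally(responses).items():
    print(dim, dict(counts))
```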
9. Fornell, Claes. "Citizen Satisfaction with Federal Government Services Plummets While Satisfaction with Government Websites Remains Strong." News release and commentary, 2011. http://www.theacsi.org/index.php?option=com_content&task=view&id=236&Itemid=259
Results: Nonsensical; agencies' missions vary so widely that comparing satisfaction rates means nothing. Can one compare NASA to the IRS? TSA to DOI?
Tools: A pop-up survey. ACSI reports scores at the national level for more than 225 companies and over 200 federal or local government services, measuring the causes and consequences of customer satisfaction.
Notes: The surveys vary among websites, so one federal agency's score is not comparable to another's; yet such comparisons are widely made.

Eyetracking, heatmapping and your website

July 2, 2007

What is eyetracking? Heatmapping? Why should you care?

For website creators, it's close to the gold standard for learning how users actually use your website. I attended an eyetracking demonstration led by Dr. Kathryn Summers of the University of Baltimore and Michael Summers of Summers Consulting, Inc., in collaboration with Nick Boswell of Tobii Technology and GSA's Web Manager University, which produced the July 2007 event.

Eyetracking follows a user's eyes as they try to accomplish a specific task on your website. During the presentation, Michael Summers explained that measuring exactly where users' eyes look on a page, as well as where they do not look, can be vital for website managers. Web managers need to know, for example, not to put anything important in the upper right corner: it won't be seen. We've become conditioned to expect advertising there, and our eyes avoid looking at that area.

We watched users look for the Federal Emergency Management Agency (FEMA) phone number on USA.gov. We could see from eyetracking data – represented as red lines and dots – that users were scanning furiously, choosing many paths throughout the website, making false starts, but finally completing the task. Their difficulties indicated that users needed more help to be able to accomplish this task easily. People who are in a disaster and looking for FEMA’s phone number are likely to be impatient and need to find this information as quickly as possible. One can see the logic in performing these tests as you can literally see the information through another’s eyes.

Heatmaps were created at the end of each user's eyetracking tests. Generally, heatmaps show which areas of the webpage are viewed most and where users' eyes lingered the longest (indicating the areas of greatest interest). Heatmaps can be generated in other ways, such as from clickthroughs on the links within a page, but this blog post is not about those types of heatmaps.
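Under the hood, an attention heatmap is just fixation points accumulated onto a pixel grid, weighted by how long the eye lingered, then smoothed. Here is a minimal sketch with made-up fixation data; this is not Tobii's actual data format or algorithm, just the general idea.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical fixations: (x, y, duration_ms) in page-pixel coordinates.
fixations = [(400, 120, 300), (410, 130, 450), (800, 500, 200), (120, 300, 600)]

PAGE_W, PAGE_H = 1024, 768
heat = np.zeros((PAGE_H, PAGE_W))

# Longer fixations deposit more "heat", matching lingering = interest.
for x, y, dur in fixations:
    heat[y, x] += dur

# A Gaussian blur spreads each fixation over roughly the foveal region.
heat = gaussian_filter(heat, sigma=30)
heat /= heat.max()  # normalize to [0, 1] for rendering as a color overlay

print(heat.shape, float(heat.max()))  # (768, 1024) 1.0
```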

Rules of thumb that I learned from this demo:

I asked if there was eyetracking data that tracks shopping cart abandonment. Well, no. This research is proprietary and in high demand. Most ecommerce-based organizations are highly interested in what factors lead to shopping cart abandonment and, conversely, in conversion rates: how to convert users into buyers. Jakob Hencke of Tobii Technology said that the Tobii forum is a good place to research that information. Google's website optimizer forum may be a good resource too.