The purpose of HCI studies is to assist in the development of systems that provide a positive user experience. That can't be done without involving the user from the initial stages of a new product or software.
The principles and processes of HCI are now expanding into other industries, in both the digital and physical realms. Implementing HCI processes when developing products and devices for a target user group saves the organization time and money. HCI involves all interactions between different user groups. Accessibility Design is a critical aspect of HCI.
My HCI research practices incorporate accessibility. This is part of CX Design research, ensuring that all consumers of a product or service have an equally positive experience.
I am often asked what my process is from conception to execution. Ideally, I use Atlassian software, which allows me to collaborate with the development team and assist in the transition from flat design to coded design. When Atlassian is not available, I have combined systems such as Axure and Trello into a customized workflow for the team.
During the interview process, I'm often asked to complete a mock work assignment to demonstrate my knowledge, skill, and turnaround time. Those assignments I found creditable are indicated and shared on this page.
Interview Assignment Timeframe: Two Weeks
Submission Timeframe: 24 hours
Interview Assignment Timeframe: Two Weeks
Submission Timeframe: Two Days
Interview Assignment Timeframe: Two Weeks
Submission Timeframe: Three Days
Affinity diagrams help organize large amounts of language data to find commonalities, which in turn helps map the user's normal task flow. Affinity diagrams can be used in team brainstorming sessions or after conducting multiple observations, interviews, or a survey.
My affinity diagrams can be as simple as colored sticky notes on a wall when working in a team environment, or as complex as one created electronically after conducting several observations or interviews.
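For the electronic version, a minimal Python sketch of the grouping step looks like this; the notes and theme labels below are illustrative placeholders, not data from a real study.

```python
# A minimal sketch of organizing coded interview notes into affinity
# clusters. Notes and theme labels are illustrative placeholders.
from collections import defaultdict

# Each note is the user's own language, tagged with a theme during coding.
notes = [
    ("I never know where the save button is", "navigation"),
    ("The search results feel random", "search"),
    ("I gave up looking for the settings page", "navigation"),
    ("Filtering by date would save me time", "search"),
]

affinity = defaultdict(list)
for text, theme in notes:
    affinity[theme].append(text)  # group notes under their theme

# Print each cluster, largest first, to surface the strongest commonalities.
for theme, cluster in sorted(affinity.items(), key=lambda kv: -len(kv[1])):
    print(f"{theme} ({len(cluster)} notes)")
    for note in cluster:
        print(f"  - {note}")
```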
The Heuristic Evaluation was developed by Jakob Nielsen and Rolf Molich in 1990, with Nielsen's refined set of ten heuristics published in 1994. This evaluation puts the evaluator in a user's mindset and asks critical questions about the system, organized around those ten heuristics.
When I complete a Heuristic Evaluation, I begin by getting answers to three questions that will guide the evaluation. I may be asked to evaluate the entire system, or just one task in the system. The time frame depends upon the depth of the evaluation: it can be as little as an hour or can take several days.
Once these three questions have been answered, I can begin the system evaluation. This involves answering the questions below, which ultimately determine whether the user will have a positive experience when interacting with the system (a sketch of how I record the answers follows the list).
Does the system provide the user with its current status?
Does the system match the user's natural task flow?
Does the system provide the user with a path that is easy for the user to follow?
Does the system provide the user with consistency so the user doesn't have to guess the system's meaning or task procedure?
Is the system designed to prevent user error?
Does the system provide assistance to the user should they incur an error?
Does the system provide the user with an easily recognizable path, rather than requiring the user to recall how to navigate to perform a task?
Does the system provide the user with flexibility in performing tasks to ensure optimal efficiency?
Is the system aesthetically pleasing to the user?
Is the system's design minimalistic, so it does not appear cluttered to the user?
Does the system provide the user with a method to easily recognize when they have made an error?
Does the system provide the user with information on what the error is?
Does the system provide the user with information on how to correct the error?
Does the system provide the user with easy access to help information?
Does the system provide the user with easy access to other important documentation?
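Below is a minimal Python sketch of how answers to this checklist might be recorded per task; the task name and finding are illustrative placeholders, and the severity ratings follow Nielsen's 0-4 convention.

```python
# A minimal sketch of recording checklist answers per task. Severity
# ratings (0 = no issue ... 4 = usability catastrophe) follow Nielsen's
# convention. Task name and finding below are placeholders.
HEURISTIC_QUESTIONS = [
    "Does the system provide the user with its current status?",
    "Does the system match the user's natural task flow?",
    # ...the remaining questions from the list above...
]

findings = []  # one entry per (task, question) pair

def record(task, question, passed, severity=0, note=""):
    """Log the answer to one checklist question for one task."""
    findings.append({"task": task, "question": question,
                     "passed": passed, "severity": severity, "note": note})

record("Create account", HEURISTIC_QUESTIONS[0], passed=False, severity=3,
       note="No progress indicator while the account is provisioning.")

# Summarize the most severe issues first for the report.
for f in sorted(findings, key=lambda f: -f["severity"]):
    print(f"[sev {f['severity']}] {f['task']}: {f['note']}")
```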
A Cognitive Walkthrough analyzes the user interface to determine if the interface matches the user's normal task flow patterns.
When I am completing a Cognitive Walkthrough of a system, I answer the same three guiding questions I defined for a Heuristic Evaluation. Once these questions have been answered, I analyze the user interface step by step to ensure that it provides the user with the ability to efficiently accomplish the defined task.
Below is a brief sample of my process. This particular walkthrough was done on three tasks within a system. I analyzed 174 steps, checking each of the four walkthrough questions at every step, for a total of almost 700 touch points analyzed to provide recommendations for the system.
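A minimal Python sketch of the bookkeeping behind that count: each step of each task is checked against the four standard cognitive walkthrough questions (Wharton et al.), so touch points = steps × questions. The per-task step counts below are placeholders chosen to total the 174 steps mentioned above.

```python
# A minimal sketch of the touch-point arithmetic. Task names and per-task
# step counts are placeholders; only the 174-step total comes from above.
WALKTHROUGH_QUESTIONS = [
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the correct action with the effect?",
    "If the correct action is performed, will the user see progress?",
]

steps_per_task = {"Task 1": 60, "Task 2": 58, "Task 3": 56}  # totals 174

touch_points = sum(steps_per_task.values()) * len(WALKTHROUGH_QUESTIONS)
print(touch_points)  # 174 * 4 = 696, i.e. almost 700
```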
The development of usability testing procedures follows a structured protocol designed to draw out the desired end results. A survey question such as, "What are the top five tasks you perform?" helps narrow down what usability testing should be performed.
A key component of the usability testing procedure is the debriefing questions. Asking the user, "What is one thing you would improve?" allows analysis using the Pareto Principle, also known as the 80/20 rule. This one question indicates which 20% of the system's tasks create 80% of the users' frustration.
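A minimal Python sketch of that Pareto analysis, assuming each debriefing answer has already been coded to the task it refers to; the tallies below are illustrative placeholders.

```python
# A minimal sketch of the Pareto (80/20) analysis on the debriefing
# question. Task codes and counts are illustrative placeholders.
from collections import Counter

# Each entry is one user's answer to "What is one thing you would
# improve?", coded to the task it refers to.
answers = (["checkout"] * 32 + ["search"] * 3 + ["login"] * 2 +
           ["profile"] * 2 + ["help"] * 1)

counts = Counter(answers)
total = sum(counts.values())

# Walk down the ranked tasks until ~80% of the complaints are covered.
covered, culprits = 0, []
for task, n in counts.most_common():
    culprits.append(task)
    covered += n
    if covered / total >= 0.80:
        break

share = len(culprits) / len(counts)
print(f"{share:.0%} of tasks account for {covered / total:.0%} of complaints:")
print(culprits)
```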
This protocol can be used for remote and live observation testing. It can also be applied to surveys and interviews.
Qualitative Analysis is the examination of non-measurable data. This analysis is used to determine commonalities among users. It can occur after an observation, interview, or survey. The qualitative measure is the language the user provided in the observation or follow-up interview, which is organized into categories to find commonalities. Once categories have been found, a quantitative analysis can occur: e.g., after observing 50 individuals completing Task 1, 76% of them stated in the follow-up interview that they were confused.
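A minimal Python sketch of that last step, with placeholder codes arranged to mirror the Task 1 example above:

```python
# A minimal sketch of turning coded qualitative language into a
# quantitative measure. The codes below are placeholders, not real data.
from collections import Counter

# Each entry is the category a participant's language was coded into
# after observing them complete Task 1 (50 participants in total).
codes = ["confused"] * 38 + ["confident"] * 8 + ["frustrated"] * 4

counts = Counter(codes)
for category, n in counts.most_common():
    print(f"{category}: {n / len(codes):.0%}")  # e.g. confused: 76%
```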
Stakeholders require quantitative data to formulate their decisions. When I conduct a qualitative analysis, I follow a process for analyzing the data to obtain the quantitative measures that stakeholders will need.
The qualitative analysis coding shows how individuals search for and organize their recipes. How is this information helpful? By analyzing the language in the observations and interviews, I was able to determine whether there was a correlation between age and searching for recipes through print or online sources. Since cooking is in part attributed to an emotional memory or experience, my analysis uncovered that an emotional attachment can form around both print and electronic recipes.
What is the benefit of my research? My analysis assists in the development of a website or app that creates an emotional attachment for the user and provides online organization systems for the recipes. My research also benefits the marketing team in developing online advertising that creates an emotional response to lead the user to the recipe website.
Qualitative data can be gathered through observation, interviews, and surveys. This verbal data has to be converted into statistical data before it can be analyzed. For demonstration purposes, I am going to test two systems to find which has the higher learnability rating. For this test I combined a prototype with a survey built on the System Usability Scale (SUS) questions. These 10 questions provide a measure for further statistical testing. The succeeding sections demonstrate how this is done.
For the sake of statistical testing, I will also show how to take this data into SPSS and perform statistical analysis.
For demonstration purposes I have mocked up a demo in Google Forms showing how this can be achieved. As shown here, you can embed a working form into your website for your users to test. Feel free to complete the form.
Click inside the form for the form's scroll bar to appear.
As seen in the Google Form, I used the 10 SUS questions for each system. This allows me to compare the two systems using both SUS and SPSS analysis methods. Google Forms provides a spreadsheet with the qualitative wording of each answer. Opening that spreadsheet in Excel and using find and replace, you can easily replace the wording with the SUS analysis numbers.
As you can see by reviewing the questions, the odd-numbered questions are worded positively and the even-numbered questions are worded negatively.
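A minimal Python sketch of the find-and-replace step and the standard SUS arithmetic (odd items contribute the response minus 1, even items contribute 5 minus the response, and the sum is multiplied by 2.5); the single participant's answers below are placeholders.

```python
# A minimal sketch of the wording-to-number conversion and the standard
# SUS score calculation. The participant's answers are placeholders.
LIKERT = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
          "Agree": 4, "Strongly agree": 5}

def sus_score(wordy_responses):
    """Convert one participant's 10 worded answers into a 0-100 SUS score."""
    ratings = [LIKERT[w] for w in wordy_responses]   # wording -> 1..5
    total = 0
    for i, r in enumerate(ratings, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd vs even items
    return total * 2.5

one_participant = ["Agree", "Disagree", "Agree", "Neutral", "Strongly agree",
                   "Disagree", "Agree", "Disagree", "Agree", "Neutral"]
print(sus_score(one_participant))  # 72.5 for this response set
```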
An average SUS score is 68. Any score above this is above average in usability; anything lower than 68 is below average. Based upon these calculations, System 1 received a score of 53, which is below average. System 2 received a score of 79, which is above average.
With these two numbers, I can give a letter grade to each system using a bell grading curve. System 1, with a score of 53, receives a D+ letter grade. System 2, with a score of 79, receives a B+ letter grade. While a B+ is markedly higher than a D+, further study of System 2 is needed to provide an optimal user experience.
Data collected from Google Forms provides insight into the user's view of the system. By asking demographic questions in the survey, I can analyze the learnability of the system and determine if there is a correlation with age or gender.
This first slide shows the raw data from Google Forms.
Since I am testing to find a correlation in the learnability of System 1 and System 2, I am focusing on the answers to questions 13 and 14. Google Forms provides a pie chart and percentages of the answers to the Likert scale questions.
This slide shows how I converted the Likert scale ratings to a rating system that can be analyzed by IBM's SPSS predictive analytics software. The top right shows the Likert scale rating. Below it, I provide a table of how I converted this data to a score of 0, 1, or 2. I then apply this score to the 22 participants who completed the survey.
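A minimal Python sketch of that recode. The exact 0/1/2 mapping comes from the slide, which is not reproduced here, so the mapping below (disagree → 0, neutral → 1, agree → 2) is an assumed example, as are the participant answers.

```python
# A minimal sketch of the Likert recode for SPSS. The mapping is an
# assumed example (the actual table is on the slide), and the answers
# below are placeholders for a few of the 22 participants.
RECODE = {1: 0, 2: 0,   # strongly disagree / disagree -> 0
          3: 1,         # neutral                      -> 1
          4: 2, 5: 2}   # agree / strongly agree       -> 2

q13 = [4, 2, 5, 3, 4]   # answers to question 13 (placeholders)
q14 = [5, 4, 4, 3, 5]   # answers to question 14 (placeholders)

q13_scores = [RECODE[r] for r in q13]  # ready for import into SPSS
q14_scores = [RECODE[r] for r in q14]
print(q13_scores, q14_scores)          # [2, 0, 2, 1, 2] [2, 2, 2, 1, 2]
```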
My hypothesis is that System 2 will have higher learnability than System 1. With my hypothesis in place, I can begin statistical testing.
Once I take the converted data into SPSS, I am provided with statistical output. This slide shows the statistics for the learnability of System 1 and System 2. I select the mean and standard deviation for both systems to apply in the next step.
Cohen's d can be quickly calculated using the UCCS effect size calculator website. Entering the mean and standard deviation for both systems, I am then provided with Cohen's d and the effect size r.
I then take the effect size r and compare it against the chart to determine that the Cohen's d correlation is small.
The p value (Sig. 2-tailed) is .064, which is slightly higher than our alpha of .05. This means I fail to reject the null hypothesis: the data do not show a statistically significant difference between the systems, so I recommend that future testing on the system be done.
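For readers without SPSS, a minimal Python sketch reproduces the same pipeline from summary statistics, assuming an independent-samples comparison since only group means and SDs are carried forward; the means and standard deviations below are placeholders standing in for the SPSS output, with n = 22 participants as in the survey above.

```python
# A minimal sketch of the t-test and effect-size step outside SPSS.
# Means and SDs are placeholders for the SPSS output; n = 22 per above.
from math import sqrt
from scipy.stats import ttest_ind_from_stats

n = 22
m1, sd1 = 1.05, 0.65   # System 1 learnability (placeholder values)
m2, sd2 = 1.41, 0.59   # System 2 learnability (placeholder values)

# Independent-samples t-test from summary statistics; the two-tailed
# p value corresponds to SPSS's "Sig. 2-tailed".
t, p = ttest_ind_from_stats(m1, sd1, n, m2, sd2, n)

# Cohen's d with a pooled SD (equal group sizes), plus the effect-size r
# that the UCCS calculator reports alongside it.
sd_pooled = sqrt((sd1**2 + sd2**2) / 2)
d = (m2 - m1) / sd_pooled
r = d / sqrt(d**2 + 4)

print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}, r = {r:.2f}")
```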
Delivery of findings and recommendations can be in print or visual display, depending upon the stakeholders' preference. Print allows for in-depth detail of the process; it can also be limited to just the findings and recommendations.
Visual delivery can be a PowerPoint-style presentation or a video. My reports and presentations vary based upon the audience and the directives I am provided. A combination of visual and print can provide a pleasant reporting experience for the stakeholders.
Stakeholders rarely have the opportunity to meet the user. Presenting findings and recommendations from the user's perspective gives stakeholders their target audience's point of view. The video provides a sample of how this can be achieved. Following the video, a formal printed brief of recommendations would be distributed and discussed.
I enjoy working individually or in a team environment. The visual and print documentation shown is from one of my MS HCI projects; corporate reports and presentations are proprietary and can't be shared.