Since May 2012, I have conducted 4 rounds of usability tests on Spark. This is a collection of everything we have gathered from those rounds.

Executive Summaries:

Phase 3.2 (Layout Usability Study – August 2012)

On August 1-2, 2012, I conducted a 5-participant moderated remote usability study on Layout for Spark.

Key Findings:

• Overall, participants had a positive reaction to the prototype. The coding themers believed that it would improve efficiency, whereas the non-coding site builders thought it would give them the power they need.
o “I can also use this for wireframing” (P2)
o “It is faster than writing code” (P3)
o “It is going in the right direction” (P3)
• Almost all participants noticed and understood the breakpoints and their purpose right away. However, 3 out of 5 participants took a good amount of time to understand how the prototype worked, for the following reasons:
o Participants’ interaction mental model was somewhat different from the existing one. They expected a fully flexible drag-and-drop model, where every individual object can be dragged and dropped. The upward motion of an object (on resizing) took a while to understand (for 3 out of 5 participants)
o Glitches in the prototype
o Quotes:
• “Moving around [objects] was a frustrating experience” (P1)
• “If I was allowed to move each one [object], the usability would be like 8-9 [out of 5 points, 5 being the highest]” (P2)
• Although this is too small a sample to generalize from, it is important to note that only 1 participant (out of 5) had a thorough understanding of the DOM and how it works.
• Participants expected that changing the layout in one configuration would not affect the others. However, they did want an option to easily replicate changes across the board (something like checkboxes). They also commented that the assurance provided by the notification was necessary during their learning curve.
• 2 out of 5 participants wanted pixel size information for each device configuration (for breakpoints and columns); see the sketch after this list
o “This is good but I want to know the pixel size” (P2)
• For all participants, it was tricky to resize the objects because of the small clickable area and glitches in the prototype
• One participant wanted a true representation of layout for their page (i.e. to be able to extend an object vertically as well).
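
Since participants asked for pixel sizes per device configuration, here is a minimal sketch of how a layout tool could surface them; the breakpoint names, pixel values, and column counts below are illustrative assumptions, not Spark’s actual configuration.

```typescript
// Hypothetical breakpoint table for a layout tool. The names, pixel
// values, and column counts are illustrative, not Spark's actual setup.
interface Breakpoint {
  name: string;       // device configuration label shown in the UI
  minWidthPx: number; // lower bound of the width range, in pixels
  columns: number;    // layout columns available at this size
}

const breakpoints: Breakpoint[] = [
  { name: "Mobile",  minWidthPx: 0,    columns: 1 },
  { name: "Tablet",  minWidthPx: 768,  columns: 2 },
  { name: "Desktop", minWidthPx: 1024, columns: 3 },
];

// Show the pixel range next to each configuration label, addressing
// "This is good but I want to know the pixel size" (P2).
function label(bp: Breakpoint, next?: Breakpoint): string {
  const upper = next ? `${next.minWidthPx - 1}px` : "and up";
  return `${bp.name}: ${bp.minWidthPx}px to ${upper} (${bp.columns} columns)`;
}

breakpoints.forEach((bp, i) => console.log(label(bp, breakpoints[i + 1])));
```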

Methodology and Users:

• All participants were external and were recruited through Twitter.
• Each session lasted for 30+ minutes.
• Each participant received a $15 Amazon gift card as compensation
• Participants change the layouts of websites or have a need to do so
• Participants primarily use Drupal (6 and 7) and find the current process to range from “easy enough” to “cumbersome”. They find Display Suite, the Context module, and the Omega theme to be good aids in the process.

Phase 3.1 (Inline/Mobile Prototype – August 2012)

On August 1-7, 2012, I conducted a 4-participant moderated remote usability study on inline editing and WYSIWYG for mobile.

Key Findings:

• The desktop inline editing experience worked well, with a high average experience rating of 4.25 (out of 5, 5 being best). Only 3 minor issues were uncovered:
o Revision control is not findable (3 out of 4 participants). Besides the date/time/author, participants suggested that a comment/description would also be useful.
o Findability of “Edit” and “Save” can be improved by making the toolbar sticky (3 out of 4 participants).
o One participant was confused about whether “Save” would save the page or just the field. It was interesting to note that all the participants clicked out of the field box and assumed that it would save the field (see the sketch after this list).
• Participants had a negative experience editing their content on their iPhones, with an extremely poor average rating of 2.5 (out of 5, 5 being best), for the following reasons:
o User Behavior: Participants had a strong prior preference to avoid using their phones to create/edit content for their website (they need bigger screens and multiple windows, want to be sure that the right content is published, and are never far away from their laptops)
o iOS Behavior: Selecting text on the iPhone is cumbersome
o Prototype Behavior:
• Glitches in the prototype (the toolbar is not sticky and behaves erratically, sometimes covering the selected area; the grey area on the toolbar also gets in the way of completing the task efficiently)
• There is no way to undo changes
• Users did not understand the following icons: brackets, symbol, lowercase
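
Two of the desktop findings above point at concrete fixes. Below is a minimal sketch of both, assuming an editable field element and a toolbar element; the function names and the save callback are hypothetical, not Spark’s actual code.

```typescript
// Minimal sketch of two fixes suggested by the findings above.
// The function names and the save() callback are hypothetical.

// 1. Save-on-blur: every participant clicked out of the field and
//    assumed the field would be saved, so persist it on blur.
function attachSaveOnBlur(
  field: HTMLElement,
  save: (text: string) => void
): void {
  field.addEventListener("blur", () => {
    save(field.textContent ?? ""); // saves the single field, not the page
  });
}

// 2. Sticky toolbar: keep "Edit"/"Save" visible while the user
//    scrolls, improving their findability.
function makeToolbarSticky(toolbar: HTMLElement): void {
  toolbar.style.position = "sticky";
  toolbar.style.top = "0";
}
```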

Methodology and Users:

• All but one participant were internal. The external participant was recruited through Twitter and was compensated with a $15 Amazon gift card.
• Each session lasted for 30+ minutes.
• Three participants create/edit/maintain content on websites. One participant creates/edits content on an intranet.

Phase 2: Inline Editing (June 2012)

I tested 3 internal participants (heavy content creators/maintainers) in 20-minute sessions in a moderated usability study for the second round of Spark testing. The tasks focused on editing content, the WYSIWYG toolbar, and publishing content. Note: Although the WYSIWYG toolbar data was gathered, it is not included in the report because it is not the toolbar that we are going to go with.

Key Findings:

• The prototype tested well. It has been found time and again that inline editing is a huge win. Participants were able to edit and publish their content.
• Participants expect the system to be smart enough to ask them to "publish" content only if changes have been made to it. Without this, participants had to re-read the content to make sure that no unwanted content got published (see the sketch after this list).
• The "View | Publish" interaction in the first round of testing tested better than the current "Edit | Publish". The current "Edit | Publish" has 5 issues.
• "Edit | Publish" could be made more prominent
• Being able to make additional changes to their settings is extremely important to users. (Although this is not available in the current prototype, it is imperative that the final design addresses this concern.)
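
As a sketch of the "only prompt to publish when something changed" expectation above: take a snapshot when editing starts, compare it against the current state, and offer "Publish" only on a difference. The PageState shape and function names here are illustrative assumptions, not Spark’s actual data model.

```typescript
// Illustrative dirty-state check; the PageState shape and the
// function names are assumptions, not Spark's actual code.
interface PageState {
  fields: Record<string, string>; // field name -> current content
}

// Serialize the state once when editing begins...
function snapshot(state: PageState): string {
  return JSON.stringify(state.fields);
}

// ...then only offer "Publish" if the content actually changed, so the
// user never has to re-read the page to check for unwanted edits.
function shouldPromptToPublish(atEditStart: string, now: PageState): boolean {
  return snapshot(now) !== atEditStart;
}

// Usage:
const before = snapshot({ fields: { body: "Hello" } });
console.log(shouldPromptToPublish(before, { fields: { body: "Hello" } }));  // false
console.log(shouldPromptToPublish(before, { fields: { body: "Hello!" } })); // true
```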

Other Information

Moderator's guides
Detailed Reports (The reports are a bit hard to read, but if you have questions, please let me know)

Comments

Bojhan

I have taken a look at the guides and detailed reports, and I have a few questions. I would love it if next time you could sort the issues by criticality - it makes it easier to scan your findings.

Phase 3.2 (Layout Usability Study – August 2012)

- Who were the participants that were tested here?
- You mention that the overall feedback was good, but then continue on to discuss major problems - how do you feel it tested? Is it good to continue, or should fundamental concepts be revisited?
- Were you able to isolate the different parts of the learning curve? (where the UI succeeded, and where it failed) This would be great information to move forward on.

Phase 3.1 (Inline/Mobile Prototype – August 2012)

- "It was interesting to note that all the participants clicked out of the field box and assumed that it would save the field." Seems like we should support this default behavior?
- Beyond a personal disliking of editing on the phone and its glitches, where there any interaction model mismatches?

Phase 2: Inline Editing (June 2012)

- It probably makes sense to test participants on the desktop version outside of Acquia; especially if we want it considered for core, the participant pool needs to stretch some more. I'd be especially interested in how content creators with content workflows experience this.
- I'm sad to see the WYSIWYG toolbar results missing, because even if it's not included, it is still valuable feedback to understand why it's not being continued.
- This is the first usability testing result I see on testing the desktop version, and it seems a little sparse - is there other data? Or does this need further testing?

dcmistry

The issues are sorted by severity. The executive summary details the high-level issues, and the detailed report document outlines each issue (with severity in column 1; the participant column outlines the severity of each problem by user).

Layout usability study
Participant profile is listed in the executive summary
Profile:
• Participants change the layouts of websites or have a need to do so.
• All participants were external and were recruited through Twitter.
• Each session lasted for 30+ minutes.

You mention that the overall feedback was good, but then continue on to discuss major problems - how do you feel it tested? Is it good to continue, or should fundamental concepts be revisited?

I think it tested well. Do note that this is a formative study and the design is evolving. The major problems as I see them (highlighted in the summary) are:
• Participants’ interaction mental model was somewhat different from the existing one. They expected a fully flexible drag-and-drop model, where every individual object can be dragged and dropped. The upward motion of an object (on resizing) took a while to understand (for 3 out of 5 participants)
• Glitches in the prototype

Were you able to isolate the different parts of the learning curve? (where the UI succeeded, and where it failed) This would be great information to move forward on.

I am sorry, I did not get this. Can you rephrase?

"It was interesting to note that all the participants clicked out of the field box and assumed that it would save the field." Seems like we should support this default behavior?

The team is brainstorming about this.

Beyond a personal dislike of editing on the phone and its glitches, were there any interaction model mismatches?

No interaction model mismatches were uncovered in this round of testing. However, this does not mean that there were none. It simply implies that personal preference and the prototype's bugs overshadowed the interaction model in this round.

- It probably makes sense to test participants on the desktop version outside of Acquia; especially if we want it considered for core, the participant pool needs to stretch some more. I'd be especially interested in how content creators with content workflows experience this.

We tested internal participants because they fit the profile. The profile is far more important than whether they are internal or external. Also, testing internal participants has its own advantages: no compensation, and they are easy to recruit and schedule. The latest study, on layout, was tested with external participants. So, we are trying to find a balance of time and money.

I'm sad to see the WYSIWYG toolbar results missing, because even if it's not included, it is still valuable feedback to understand why it's not being continued.

Because the prototype tested was not the WYSIWYG toolbar that we were going to go with. Giving that information to the team would not have helped them in any way.

This is the first usability testing result I see on testing the desktop version, and it seems a little sparse - is there other data? Or does this need further testing?

This was the very first round of testing, and it focused only on the high level, for efficiency's sake and to guide the design in the right direction. Its aim was not to go too granular, because the design was still being formed. Phase 3 was the next round of testing conducted.

Hope this answers your concerns.

webchick

Component: User interface » User testing

Moving this to the brand new "User testing" component, so we can better track these.

webchick

Issue summary: View changes
Status: Active » Closed (fixed)

We're not doing any active UX testing of Spark atm. If we do some in the future, it'll be around D8 core instead.