Strict engineering guidelines, consistent visual design per unit, 300px maximum height due to advertising costs
Data-relevant deep link carousel unit for advertising
Client Project, Iterative Experiment Loop
Michael Beckman (Stakeholder), Lakshmi Manikantan (Project Lead), May Wang (Product Designer), Tommy Murray (Product Designer)
URX is changing the way publishers deliver mobile ads to consumers by providing engaging advertising units directly related to the content on the page. Our task was to optimize the design of the carousel unit to drive more click-through and swipe-through traffic.
When we began, we followed standard usability testing methods to test engagement with the carousel. We recruited users who read frequently on their mobile phones, gave them a phone displaying an article from Wikia, and asked them to go through the article, interact with it, and narrate their thoughts.
However, pretty early in the process, we realized this was not the best method for testing the carousels, for several reasons:
- Without a clearly defined task, users commented on anything and everything they saw, such as font sizes in the article, the publication's logo, etc.
- People can’t predict their future behavior, i.e., would they actually have clicked on this ad?
- Users weren't keen on reading a random article they had no interest in
- We got no insights into the design of the carousel unit
So we decided to delve deeper to understand what factors were at play each time a user came across an article with a carousel in it.
So then, what factors affect whether a user will click on the carousel?
Our team's process relied heavily on testing and iteration based on established hypotheses. To generate design solutions for the music carousel, we ran a design studio inspired by Google Ventures' Design Sprint.
Below are the 6 design solutions that received the most votes in the design studio. We decided to run an experiment loop on our top 4 solutions to determine a clear cause and effect for each one.
Based on the top-voted ideas, we created high-fidelity mockups of the designs we wanted to test.
In the loop, we developed a hypothesis for each design solution, then tested that solution individually on 5 users. If that design solution met our target metric, we used it in our next round of testing while also adding the next design solution. If not, we moved forward without it.
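The loop above can be sketched in a few lines of code. This is only an illustration of the procedure, not our actual tooling: the solution names, the `test_on_users` function, and the pass/fail check are hypothetical placeholders.

```python
# Illustrative sketch of the experiment loop described above.
# Solution names and the stubbed test function are hypothetical.

def run_experiment_loop(solutions, test_on_users):
    """Test each design solution on 5 users; keep it for the next
    round only if it meets its target metric, then add the next one."""
    kept = []      # solutions carried into the next round
    results = {}   # outcome per solution
    for solution in solutions:
        candidate = kept + [solution]
        outcome = test_on_users(candidate, n_users=5)
        results[solution] = outcome
        if outcome["met_target"]:
            kept.append(solution)  # carry it forward
        # otherwise move on without it
    return kept, results

# Stubbed test function: pretend every solution except the
# "listen_on_copy" idea meets its target metric.
def fake_test(candidate, n_users):
    latest = candidate[-1]
    return {"met_target": latest != "listen_on_copy"}

kept, _ = run_experiment_loop(
    ["play_button", "carousel_indicators", "listen_on_copy", "progress_bar"],
    fake_test,
)
print(kept)  # → ['play_button', 'carousel_indicators', 'progress_bar']
```

The key property of the loop is that each round changes exactly one variable relative to the previous round, so a swing in the metric can be attributed to the newly added solution.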
To establish a baseline, we started by testing URX's original music carousel unit on 5 users. This video shows the instructions we provided to all 20 participants we tested during the experiment loops. We would then leave the room, and after 5 minutes we'd return to ask participants about their experience with the prototype.
The control performed better than expected, with all 5 users eventually clicking, though only 2 clicked on first pass.
USABILITY TESTING & ITERATIONS
Hypothesis 1: Adding a play button makes for a clearer CTA, and 3/5 will click on first pass.
Hypothesis 2: Adding carousel indicators will encourage users to swipe, and 4/5 will swipe.
Version A produced a 100% increase in users clicking on first pass and a 400% increase in users swiping. As a result, we decided to move forward with both the play button and the carousel indicators.
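The percentage changes reported throughout are relative to the baseline round: for example, a 100% increase over the control's 2 first-pass clicks means 4 of 5 users clicked on first pass. A minimal sketch of that arithmetic (the click numbers come from the results above; the helper function name is ours):

```python
# Percent increase relative to the baseline round.
def percent_increase(baseline, new):
    return (new - baseline) / baseline * 100

# Baseline: 2 of 5 users clicked on first pass in the control test.
# A 100% increase therefore corresponds to 4 of 5 users clicking.
print(percent_increase(2, 4))  # → 100.0
```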
Hypothesis 1: Adding “Listen On” text to the top bar will make the player more native, and 3/5 will think it’s a native player.
Version B did not perform as well as we expected: adding the "Listen On" copy led to only 1 user thinking the unit wasn't an ad. In Version C, we abandoned the "Listen On" copy and added a progress bar.
Hypothesis: If we add a progress bar, 4/5 users will think the carousel looks more like a native embedded player and less like an ad.
Adding a progress bar led to our highest-performing unit. We moved forward with the same design in Version D but consolidated the unit to a single song.
Hypothesis: If we show one song rather than three, the number of choices will be reduced, and 4/5 will click on first pass.
The single song music player performed well in most categories, but not on the most important metric, with only one user clicking on first pass.
Across our experiment loop, Version C (pictured below) performed the best, with a 50% increase in clicks on first pass and a 500% increase in swiping. We recommended this version to URX, but suggested additional A/B testing on a wider sample pool to validate our findings.