Brief Ratings

Many applications that provide a service ask for ratings. These ratings can help improve internal processes and measure performance. This project focuses on prompting users to rate briefs more frequently, improving the UX and gathering more actionable feedback.

Company: Shutterstock Custom
Role: Product

Brief Ratings Case Study

Background: A brief is a project where a user can request custom content. Our team delivers that content, then asks for ratings on various aspects of the project, ranging from overall service to details within the custom content itself (e.g. composition, on-brand fit, variety).

Problem: Current brief ratings are triggered by a thumbs up/down in a single spot. From there, the survey asks for a 5-star rating with a free-form text box. The free-form feedback is unmeasurable and unactionable at scale. The team is not receiving enough feedback, and what qualitative feedback does arrive is not actionable, so this survey has created pain points for both user groups (clients and the internal team).

Task: Design an experience with multiple entry points that lets users give us both quantitative and qualitative feedback. Find a creative solution that gains the user’s attention and encourages them to actively participate in rating briefs.

Strategy: Understanding the users is essential when designing any solution for a product, including their behaviour, actions, likes and dislikes. According to research, users are more inclined to respond when they are reminded: they often intend to give feedback, but it is not convenient in the moment. Several articles also suggest that users provide feedback when they believe it adds value to their lives. Our short-term strategy was to use subtle reminders to complete the task while demonstrating its value.

 “We give feedback because we know it’s important. Our hope is that with every brief that goes through, we’re able to see that the next one gets better.” — ThinAddictives

Hypothesis of Solution: After conducting user research and analyzing our platform analytics, it was evident that the current spot (thumbs up/down) did not make sense and was often ignored. Through research, we discovered that our target demographic predominantly uses email to communicate and visits our platform on desktop. With that in mind, we wanted to improve consistency in our ratings (making stars visible) and create two more entry points (one additional on the platform + one in email).

People need to see value in providing brief ratings, and results from doing so, in order to stay satisfied. Shutterstock Custom should meet these expectations by adequately measuring these success metrics and users’ satisfaction levels.

Design Phase: The design phase started with a deep and thorough competitive analysis to understand common patterns used when measuring satisfaction with products/services. UI inspiration was drawn from companies such as Uber, Starbucks and Expedia.

Our team had to design a survey that gave us actionable feedback and allowed our Client Success Managers to start meaningful conversations with our clients. We did a lot of internal mapping between our sales team, our CSMs, delivery and product to figure out which categories to measure against. These details allow our team to find areas of success and improvement.

We created a collaborative list of measurable categories and iterated on it with each team. After finalizing the categories, we needed to display them within the survey in a user-friendly way that still gave our team the right information.

User Testing: We tested two potential solutions: a Likert scale versus selecting categories. One clearly provided more feedback, whereas the other was quicker to use. Through testing with users, we chose category selection because it was less daunting and quicker to complete. Though the Likert scale provided more information, it deterred users from giving feedback.

Iteration: Having chosen category selection over a Likert scale, we needed a way to properly ask where we succeeded and where we could improve. Analyzing common trends in our previous feedback, we realized that 4–5 stars meant our clients were satisfied, but 5 stars was what we wanted to obtain. For anything rated below 5 stars, we asked what we could improve on and then showed the categories; for a 5-star rating, we asked what we did well on.
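The branching above can be sketched in a few lines. This is a minimal illustration, not the production code: the function name, question wording and category names are hypothetical stand-ins.

```typescript
// Illustrative sketch of the survey branching: ratings below 5 stars
// route to the improvement question; a 5-star rating asks what went well.
// Category names here are hypothetical examples, not the real list.

type SurveyPrompt = {
  question: string;
  categories: string[];
};

const CATEGORIES = ["Composition", "On-brand", "Variety", "Turnaround"];

function buildSurveyPrompt(stars: number): SurveyPrompt {
  if (stars < 1 || stars > 5) {
    throw new Error("Rating must be between 1 and 5 stars");
  }
  // 4-5 stars signalled satisfaction, but 5 was the target,
  // so anything below 5 still asks where to improve.
  const question =
    stars === 5 ? "What did we do well on?" : "What could we improve on?";
  return { question, categories: CATEGORIES };
}
```

Keeping the category list identical on both branches means the same data is collected either way; only the framing of the question changes with the rating.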

Entry Points: We included a blue bar under the navigation within each brief as a reminder and prompt to rate that brief. Like many platforms, the bar does not disappear until feedback is provided. This slight annoyance helped increase the number of ratings, and it is a UI pattern many users are already familiar with.

Email was another entry point. With the platform scaling quickly, we needed to automate emails announcing that a brief had launched and asking the user to rate it. From this email, a button takes them into the platform with the survey already prompted. We were also able to tackle UX/tech debt by extending the user session so they did not have to log back in.

The final entry point replaced the thumbs up/down within each brief card in the main index view, which had caused a lot of confusion, with 5 stars. When a user clicks a star, the survey modal opens; once the brief is rated, the stars disappear for that user. Incorporating stars into this UI also improved consistency.
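The card behaviour described above (click a star to open the survey, hide the stars once rated) can be sketched as simple state transitions. Names and types here are illustrative assumptions, not the actual component.

```typescript
// Illustrative sketch of the brief-card rating state: clicking a star on
// an unrated brief opens the survey modal; once rated, the stars are
// hidden for that user. Not the production component.

type BriefCard = {
  briefId: string;
  rated: boolean; // once true, the stars are hidden for this user
};

function clickStar(card: BriefCard): { openSurveyModal: boolean } {
  // Only an unrated brief should trigger the survey modal.
  return { openSurveyModal: !card.rated };
}

function completeSurvey(card: BriefCard): BriefCard {
  // After the rating is submitted, mark the card so its stars disappear.
  return { ...card, rated: true };
}
```

Modelling "rated" as per-user, per-card state is what lets the stars vanish for one user while remaining visible to teammates who have not yet rated the brief.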

Results: Since launching this feature in May 2017, we increased the number of brief ratings by 36% within the first month.

Conclusion: Though a rating system is a common feature across platforms, we had to customize ours to fit our needs. The project delivered strong results and allowed us to understand where to improve and what we were doing well. Brands saw improvements in brief quality and began to rate more often. With these changes in place, we are still iterating: our list of categories turned out to be very high-level, and in the next iteration we would make it more granular. Another area I would improve is email notifications: we could send a reminder email to rate a brief a certain number of days after it is first viewed.