A revolutionary new tool for university performance management

I worked with Watermark Insights to design a powerful new tool for universities to manage their performance. University performance is assessed through student exams, class assignments, course enrollment, and other metrics (also called Measures). The goal was to dramatically simplify data collection and empower universities with analytics to improve their performance.

Visualization of results

Problem Statement

  1. Provide program leads with a tool to record their assessment activities.

  2. Help universities collect student and administrative data to inform university performance.

  3. Align the collected data to university performance and help organization leaders analyze it.

Timeline

12 months

Team

I was responsible for the research and design of this tool. My team consisted of a project owner (there were a few organizational changes during this project), who provided us with the business requirements, and a team of passionate software developers and SDETs.

Project Journey

Research

  1. Interviews with Internal stakeholders - I started this project by reviewing existing research and interviewing internal stakeholders. This was my first method of research as it was low in cost and provided an overarching view of user needs.

  2. Querying legacy systems - Some of Watermark's legacy products touch on this concept, so I ran queries on them. This gave me quantitative data on the assessment activities used by programs.

  3. Competitive Research - I reviewed software like Campus Lab, AFEIS, E-Lumen, Tk20, and AMS by Taskstream. I also looked into learning management systems (LMSs) like Blackboard and Canvas. This helped me get a better understanding of the user's world.

  4. Interviews with actual users - These provided real use cases, were far more detailed, and helped increase user acceptance.

I conducted 30-40 interviews throughout this project. I created interview guides with screening questions and recruitment methodologies to guide the process. Most of these interviews were conducted via audio/video conferencing tools, while 6-7 were conducted in person. I used visual artifacts to help explain some of the more complex concepts.

Artifacts used for interviewing

Research Findings

  1. Universities had inconsistent processes and timelines for collecting student and administrative data, and they used different formats.

  2. Program leads had a difficult time communicating data collection requirements to the faculty members and following up with them.

  3. Faculty members were averse to using a new software product to provide assessment data. They wanted something that didn't require a lot of effort on their part.

Sketching

Next I created user flows and wireframes to gather feedback from my stakeholders. I created several iterations to reach the final design. These iterations were tested with users and subject matter experts (SMEs). Each iteration helped me define the product better and improve its user experience.

Iteration 1

Iteration 1: A basic form to fill measure details

Value & design: In this iteration, assessment activities were recorded in the system through a basic form. The form captured the information required to define a basic student activity (Measure).

Insights:

  1. This design helped test our basic understanding of a Measure and its terminology.

  2. Interviews with Knowledge Experts revealed a need for detailed documentation of the activity. The form was not sufficient to cover all the different kinds of activities and their collection processes.

Iteration 2

Iteration 2: A workflow to configure the collection form.

Value & design: This iteration provided users with a workflow to add details about student activities and collect their results.

Insights: This turned out to be a long and complicated process for users. They preferred their inefficient existing processes over this complicated workflow, which led to the third iteration.

Iteration 3

Iteration 3: Visualization for the results collected.

Value & design: In this iteration, users got different options for collecting student results. Depending on a university's practices, they could choose between multiple collection methods. It also provided program leads with a visualization of the collected data.

Insights: This iteration received a lot of appreciation from users and was seen as the right direction for the product.

Testing several low-fidelity mockups helped me learn about my users' needs and find ways to accommodate their different practices.

Previewing before publishing

Selecting data collection format

Setting the scoring scale

Testing before Launch

Guerilla Testing

This was used to test individual interactions and initial concepts with subject matter experts (SMEs). Some screenshots of the prototype are shown below:

Alpha Test - Internal Test with SMEs

This was an unmoderated test of the entire product conducted with 20 Watermark employees. Objectives were decided by the entire product team, and results from these tests were analyzed, documented, and shared with all the stakeholders. Some of the findings were:

  1. Users felt lost without clear indications of the next steps.

  2. Lack of affordance made some links seem non-interactive.

  3. System language: users needed more guidance (help text and instructions) to create useful data.

I conducted another round of user tests to validate the implemented solutions.

Beta Test - External Testing

This was the final round of testing.

It was piloted with two internal employees and 15 client users, selected from the company's group of Knowledge Ambassadors (KAs). Solutions to the problems identified were discussed within the design team and implemented before launching the product.

Final Solution

A tool that helps universities document student activities and collect their results from faculty. Because universities set up this process differently in their programs and are at varying levels of assessment maturity, their needs differ. We designed a solution that accommodates these varying needs and facilitates communication between program leads and faculty members.

Lastly, visualization of the results helped program leads analyze the collected data and make better decisions based on it. At launch, users described the tool as highly intuitive and useful, as well as clean and simple.

It’s more simple than its ever been. That would eliminate a lot of issues right off the bat. - User

Final Product Screenshots

Basic Measure details & different formats of data collection to accommodate different user needs

Visualization of the results to help analyze them better.

Email sent to the faculty members to make it easy for them to provide assessment data.

Input results for the faculty members

More Product Screens

Design Learnings & Project Take-aways

  1. Iteration is your best friend; trust the design process.

  2. Humans are creatures of habit, so always start from their current processes. Those processes are your biggest competition.

Linking student activities (Measures) to university performance parameters was uncharted territory for several companies. Our tool helped universities link these entities and collect student results for them. It also helped universities make better decisions by showing trends and insights in these results.

This tool gives the product a competitive edge. The novelty had its own disadvantage: there were no old mistakes to learn from, so everything had to be tried and tested. The other challenging part of this project was the diversity of user needs (each university had its own understanding of assessment and created its practices accordingly). Creating a tool that could serve everyone was hard.

The company went through a lot of organizational and leadership changes while this product was being built, which meant changing requirements and timelines. Working through these changes prompted me to build and test several iterations of this tool until it tested really well and got users excited.