A revolutionary new tool for university performance management

I worked with Watermark Insights to design a powerful new tool for universities to manage their performance. University performance is assessed through student exams, class assignments, course enrollment, and other metrics (also called measures). The goal was to dramatically simplify data collection and empower universities with analytics to improve their performance.

Visualization of results

Problem (Research Insights)

  1. Universities had inconsistent and poorly organized practices for collecting student results from faculty members.

  2. Program leads (users) relied on several disconnected software tools for collecting results, conducting analysis, and reporting.

  3. Program leads had a difficult time communicating data collection requirements to the faculty members and following up with them.

  4. There was no easy way for programs at a university to keep track of their collection process.

  5. Existing systems did not link student results to university performance parameters.

Solution

This tool helps universities document student activities and collect the corresponding results from faculty. Because different universities set up their programs differently and are at varying levels of maturity in assessment, they need different result collection methods. We designed a solution to accommodate these varying needs. The system also helped facilitate communication with faculty members. Lastly, visualization of the results helped program leads analyze the data and make better decisions.

At launch, users described the tool as highly intuitive and useful. They considered it clean and simple.

It’s more simple than it’s ever been. That would eliminate a lot of issues right off the bat. - User



Timeline

12 months



Team

I was responsible for conducting research and designing this tool. My team consisted of a dynamic project owner, who provided us with the business requirements, and a team of passionate software developers and SDETs.

Project Journey

Interviews

I started this project by reviewing existing research and interviewing the key stakeholders, followed by interviews with users. We conducted 30-40 interviews throughout the project, and the objective of the interviews shifted with the phase of the project, from questions that built a basic understanding of the system to testing prototypes. I created interview guides with screening questions and recruitment methodologies to guide us through the process. Most of these interviews were conducted via audio/video conferencing tools, while 6-7 were conducted in person. I used visual artifacts to help explain some of the more complex concepts.

Artifacts used for interviewing

Example of an interview guide

Competitive Research

This was a new concept and did not exist in the competitive landscape either. For inspiration, I reviewed software built on similar concepts, for example Campus Lab, AFEIS, E-Lumen, Tk20, and AMS by Taskstream. I also looked into Learning Management Systems (LMS) such as Blackboard and Canvas. All of this software helped me better understand the users' world and the terminology and concepts they commonly used.

Sketching

Next, I created user flows and shared them with stakeholders for feedback. Once validated, I started sketching wireframes to visualize ideas. There were several iterations of these throughout the project, and they were tested with users and subject matter experts (SMEs). The feedback from one iteration was used as input for the next.

Each iteration helped us better define the requirements for our product and improve the user experience.

Iteration 1

Iteration 1: A basic form to fill measure details

Value & design: In this iteration, the tool provided the user with a form to document measure details. The form consisted of all the fields thought to be required to define a basic student activity (Measure).

Insights:

  1. This design tested our basic understanding of a Measure and the terminology used to define it.

  2. Interviews with Knowledge Experts revealed that users wanted to document more details about the activity and needed functionality to collect student results. This design was too basic to cover the different kinds of activities and their collection processes.

  3. Users needed to explain their activity in more detail, and they needed a process to collect and analyze student results. This led to our next iteration.

Iteration 2

Iteration 2: A workflow to configure the collection form.

Value & design: This design provided the user with a workflow to document details of student activities and collect their results.

Insights: Testing revealed that users have different needs for collecting results depending on how the university is set up and its maturity in assessment. This led to the third iteration of the tool.

Iteration 3

Iteration 3: Visualization for the results collected.

Value & design: In this iteration, the user had several ways of collecting student results. Depending on how the university was set up and the kind of activity involved, the user could choose among multiple collection methods. The tool also provided program leads with a visualization of the data collected from faculty members.

Insights: This iteration received a lot of appreciation from users during testing. It was chosen as the direction for building the tool and was broken down into smaller versions to be built iteratively.

Testing several low-fidelity mockups helped me learn about my users' needs. It helped me find ways to accommodate different university practices and address specific user needs. Testing these iterations with a diverse user base helped me identify their general applicability and the exceptions.

Testing before Launch

I tested the tool three times before launch, and each round analyzed a different part of the system using a different testing method.

Guerilla Testing

Guerilla tests were used to evaluate individual interactions and initial concepts with subject matter experts (SMEs). SMEs are Watermark employees who were in constant contact with users, training them on existing company products. Some screenshots of the prototype are shown below:

Selecting data collection format

Setting the scoring scale

Previewing before publishing

Alpha Test - Internal Test with SMEs

This was unmoderated testing of the entire product, done with 20 Watermark employees. Objectives for this test were decided with the entire product team, i.e., the teams responsible for the different tools in the product. The results were analyzed, documented, and shared with all stakeholders. Some of the findings were:

  1. Users felt lost without proper indications of next steps.

  2. Lack of affordance made some links seem non-interactive.

  3. System language: users needed more guidance (help text and instructions) to create useful data.

Each designer analyzed the alpha test results for their domain and identified problems. Solutions were discussed with the UX team at large to create a shared understanding of the test findings and to establish standardized design patterns. I conducted another round of user tests to validate the proposed solutions.

Beta Test - External Testing

This was the final round of testing. It was piloted with two internal employees and then run with 15 users, selected from the company's Knowledge Ambassadors (KA). These were moderated tests, conducted in conjunction with a senior designer. I worked with other designers to create the test plan and questions. Solutions to the problems identified were discussed with the design team and implemented before the product was released.

Final Product Screenshots

Measure activity & its results format

Visualization of the results collected for a measure

An example of the email sent to the faculty members to collect results.

Input screen for the faculty members to provide student scores

Project Take-aways

Linking student activities, aka Measures, to university performance parameters was uncharted territory for several companies. Our tool helped universities link these two entities and collect student results. It also helped universities make better decisions by showing trends and insights in these results.

This tool gives the product a competitive edge. The novelty had its own disadvantage: there were no old mistakes to learn from, so everything had to be tried and tested. Another challenging part of this project was the diversity of user needs (each university had its own understanding of assessment and created its practices accordingly). Creating a tool that could serve everyone was hard.

The company went through a lot of organizational and leadership changes while this product was being built, which meant changing requirements and timelines. Working through these changes prompted me to build and test several iterations of this tool, and I kept iterating until it tested well and got users excited.