National Center for Construction Education and Research
January 2023

Creating a Digital Evaluation Tool

Expertise

UX Design, UI Design, Design Systems

Platforms

Desktop, Mobile, Tablet

Tools

Figma

Project Overview

Background

The National Center for Construction Education and Research (NCCER) is a not-for-profit education foundation for professional craft certification. The core of their business revolves around two pillars on which learners are tested and evaluated: knowledge and performance. At the time of this project's kick-off, performance evaluations were done entirely on paper. Aside from the administrative issues that come with a stack of papers, it is important that NCCER continue to innovate and provide useful tools to its customers. In light of the company's overarching initiative, dubbed "Simplified", leadership presented the design team with the goal of building a tool that would allow instructors to evaluate students on the fly.

Team Members

Jeremie Montero

Responsibilities

  • Stakeholder Interviews
  • UI Design
  • User Testing
  • Design System Management

Goal

Perform due diligence, ideate, and prioritize the features instructors need in order to evaluate their students efficiently. Rapidly produce a responsive, user-tested interface that is ready to hand off to the development team by the project deadline.

  1. Observe and Understand Users
  2. Design & Test the Product
  3. Create UI Kit
  4. Ship

Execution

Observe & Understand Users

The first month of this project was spent interviewing NCCER stakeholders and customers, as well as observing them in a live setting. Our goal was to validate the need for a digital tool and find out which features were necessary for it to succeed. Jeremie headed to Florida so that she could collect data in the classrooms and conduct customer-oriented focus groups, while I stayed back in the comfort of my own home to conduct remote interviews with our stakeholders. Through these efforts, we were able to define the business requirements and reconcile them with the users' needs.

We used the insights from our collective research to generate user stories:

  • As an evaluator, I want to be able to view an in-progress section so that I can easily access previously started evaluations.
  • As an instructor, I should be able to easily locate and select a group/class so that I can move forward with evaluating during a class session.
  • As an instructor, I need the ability to add and evaluate one learner at a time in the case of a make-up session.
  • As an evaluator, I should be able to visually understand which student I have selected and am evaluating in real time.
  • As an evaluator, I need to be able to start and stop evaluations while recording the time it takes my students to complete each task.

Design & Test The Product

After a month of research, we had collected valuable insights that would shape the early designs of the new tool. We flew down to Alachua, FL, where Jeremie and I facilitated a three-day design workshop at NCCER headquarters. We presented our findings to company stakeholders, including members of leadership, customer service, marketing, engineering, and more.

Workshop Takeaways:

  1. We will be building a web application that is initially released for desktop and mobile.
  2. There will be two separate web apps: a Performance Verification tool for experienced craftsmen, which can only be used one-on-one, and a Performance Profile tool for craftsmen early in their careers who are building basic skills. The latter is suited to instructors with large classes and therefore needs to support evaluating more than one candidate at a time.

Within 24 hours, we put together low-fidelity mockups that served as a readily testable MVP. With the layout and features built for version 1, we invited several customers to NCCER headquarters to test our new idea.

More User Testing

We conducted two more rounds of usability testing, making updates on components and screens along the way.

Using Dovetail, we were able to easily store and transcribe our sessions and generate crucial insights that helped improve the product as we moved along.

Key Testing Takeaways:
  • Users need a way to quickly navigate from the currently selected task to any other task within a given evaluation. We learned that evaluators do not typically perform an evaluation in linear fashion, and therefore need to be able to jump between tasks with ease.
Task Navigation Component
  • The tables on the dashboard need to include more information for each candidate, such as when they were added to the roster and which craft they are assigned to.
Table Component
  • Users do not want to track 'sub-tasks', the smaller steps that make up a parent task. A business requirement added after the first round of user tests was the need to track these sub-tasks. After implementing them, we quickly learned from our users that this was (a) a new workload they did not expect and (b) a cumbersome amount of input, especially for some unique tasks consisting of 10+ sub-tasks. We took this insight to leadership and convinced them to remove the requirement so that we could avoid frustrating our users.
Left: initial task evaluation design. Right: new design.

Create UI Kit

While we made improvements to the web app, I simultaneously collected frequently used components and added them to our UI kit. This Figma file served as the single source of truth for all finalized components and allowed the design team to quickly iterate and publish updates. Each component went through rounds of approval from Jeremie and stakeholders.

Outcome

Solution

Over two months, our team played ping-pong between user testing and interface design. This process helped us deliver a digital evaluation tool that replaces paper and launches NCCER into the modern product era. Contrary to the initial plan, we ended up combining the two web apps into one: the Performance Evaluation Application. This web application allows instructors to create rosters, easily navigate between tasks, input student results in real time, and much more.

Impact

Through the process of building this new tool, our team influenced more than the design of the product. Using evidence from our research and testing, we were able to become advocates for change within the organization.

  • Standardized Evaluation Process: What was initially a confusing system of checks and balances is now streamlined, with evaluators and instructors holding final submission authority.
  • Simplified Tasks: Using our testing data, we successfully implemented a streamlined task model. The new model lets users view sub-tasks for reference rather than requiring direct observation and input for each one.