Plato

Helping data scientists explore data through ML-assisted workflows.

Overview

Data scientists spend an estimated 50-80% of their time making sense of large datasets. Exploratory data analysis is a critical process in data science, but it can be time-consuming and cognitively overwhelming. Over the course of one year, I worked with the director of data science to define and implement a layered, trustworthy exploration framework for accelerating data analysis.

My contribution

User Research
Product Design
Frontend Prototyping

The team

1 x product manager
1 x designer & researcher
1 x backend engineer

Alongside the core team, I worked with the director of data science and 7 members of the data science team.

Year

2022-23

Process

This was a long project, with 2 major design phases across a period of one year.

The first phase was more exploratory, with my focus being on adding structure to a data scientist's analysis workflow.

In the second phase, I addressed issues around the tool's usability, interactivity, and transparency.

User Research

Data science was an unfamiliar space for me. To better understand the pain points of my user group, I conducted semi-structured interviews with 5 senior data scientists, each with 15+ years of experience.

I was keen to understand their experience with exploratory data analysis and automated solutions.

"When you look at a big table with many columns, you often get lost. In the domain of big data, displaying critical information and helping people dissect information layer by layer: I think that's most important to me."

The feedback I received revolved around two themes: a lack of clear direction during analysis, and a lack of interactivity in the existing tools that leverage automation.

I was most surprised to hear a data scientist mention that they'd prefer their time-consuming, outdated workflows over the automated tools available today.

Users faced high cognitive load during analysis.

Design implication: Support users' exploratory workflows through better structure and guidance.

This led to some users brute-forcing models to better understand their data.

Design implication: Encourage users to be curious and explore by highlighting strange things in a dataset.

All users agreed that data quality was key to their sensemaking.

Design implication: Visualize the quality of a dataset to accelerate data analysis and sensemaking.

Users were reluctant to shift to newer tools.

Design implication: Reduce the learning curve involved by designing flows that feel natural and familiar to users.

Design I : Cognitive Load

I focused the first design phase on reducing the cognitive load placed on users.

Categorizing automated insights


Data analysis is deeply layered and involves inspecting datasets at different granularities. I structured the data sensemaking process into 3 levels of granularity.

Once I had an information architecture in place, I shifted focus towards designing a layout that allowed users to interact with data and switch intuitively between different granularities.

I brainstormed various layouts in low fidelity while receiving constant feedback from a data scientist.

Next, I brainstormed concepts to give users a bird's-eye view of their data. This was in response to user feedback on the dangers of overlooking patterns while studying data.

The goal was to eliminate "cold-starts" and use visual cues to prompt deeper exploration. I chose to focus specifically on visualizing the quality of individual columns in a dataset.
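The case study doesn't specify which quality signals the tool computed, so as a rough illustration only, here is a minimal sketch of the kind of per-column quality summary that could drive such an overview. The specific metrics (missing values, cardinality, data type) are my assumptions.

```python
import pandas as pd

def column_quality(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize per-column quality signals for an overview visualization.

    The choice of metrics is illustrative, not the tool's actual logic.
    """
    return pd.DataFrame({
        # Column data type: numeric vs. categorical shapes later chart choices.
        "dtype": df.dtypes.astype(str),
        # Share of missing cells: a first "is this column usable?" cue.
        "missing_pct": (df.isna().mean() * 100).round(1),
        # Share of distinct values: ~0% hints at constants, ~100% at IDs.
        "distinct_pct": (df.nunique(dropna=True) / len(df) * 100).round(1),
    })
```

A summary table like this can then be encoded visually (for example, one bar or cell per column) so users can scan for problem columns before diving in.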

Once users were introduced to their dataset, I designed the subsequent granularity levels to shift control from the system back to the users.

User Research II : Usability Tests & Interviews

Testing my prototype and diving deeper into key research areas...

I started the second round of user research with 5 usability tests that helped me evaluate how well the tool supported a data scientist's exploration workflow.

"What I find great here is that the entry is simple and does not have too much barricades. It gives people a starting point, regardless of how deep their knowledge is."

Users responded positively to the layout of the tool and the insights being presented. However, a large number of users were reluctant to trust automated insights and visualizations they were not familiar with.

"I need to be comfortable with what you're saying is an error, I need to build some trust.

"It could be possible that since I'm a data scientist I like to deal precisely with numbers and see detailed information. Otherwise I'm more suspicious."

I conducted another round of semi-structured user interviews, talking to 4 data scientists about what it would take to build their trust.

"Depending on the projects, you should also be able to experiment to see what behaves better. It would be very important to have that and certainly would give me confidence."

50% of the users found the overview visualization ambiguous on first impression.

75% of the users did not immediately trust the overview visualization and insights.

80% of the users wanted to see more quantitative evidence included in automated insights.

All users believed that their trust overlapped with transparency & control.

40% of the users wanted to further optimize the overview visualization.

Users believed that the tool gave them good starting points within datasets.

Design II : Cognitive Load, Interactivity & Usability

I started the second round of iteration by focusing on optimizing the data visualization.

Some users preferred fewer colors while working with large datasets, and wanted to visualize all columns without having to scroll.


I turned to Edward Tufte's visualization principles to optimize the overview visualization. To benchmark design concepts, I created a checklist of 3 visualization principles and 3 user requirements.

Initial Concept
Final Concept


Connecting 3 views with a brushing & linking interaction would allow users to visualize parts of the dataset at finer granularity. I prototyped the interaction in Vega-Lite to demonstrate its implementation.
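My prototype was a Vega-Lite spec; since it isn't reproduced here, the sketch below reconstructs the same brushing & linking pattern in Altair (Vega-Lite's Python API). The dataset and fields are placeholders, and for brevity it links two views rather than the three in the actual design.

```python
import altair as alt
from vega_datasets import data

cars = data.cars()  # placeholder dataset standing in for the user's table

# The brush: an interval selection the user drags on the overview chart.
brush = alt.selection_interval()

# Overview chart: shows every record and hosts the brush.
overview = alt.Chart(cars).mark_point().encode(
    x="Horsepower:Q",
    y="Miles_per_Gallon:Q",
).add_params(brush)

# Linked detail chart: re-aggregates only the brushed records, letting
# users drill into a region at the next level of granularity.
detail = alt.Chart(cars).mark_bar().encode(
    x="Origin:N",
    y="count():Q",
).transform_filter(brush)

# Vertical concatenation links the views: brushing above filters below.
chart = overview & detail
chart.save("brushing_and_linking.html")
```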

I sketched a storyboard and collected feedback from 4 users. While I received positive feedback on the concepts, visualization experts suggested using less space for the main visualization and considering how users would decipher column names when 50+ data points are involved.

Designing for edge cases


Next, I brainstormed design updates through the lens of control, trust & cognitive load.

Some of the design ideas



Allowing users alternate ways to explore data:

Defining the process for generating visualizations


During the usability tests I discovered that visualizing relationships between columns was more critical to users' sensemaking process than studying a single column in isolation. For the final granularity level, my goal was to allow users to visualize relationships between columns without having to write code.
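The case study doesn't describe how the system picked a chart for a given column pair; one plausible sketch, with the dispatch rules entirely my assumption, is to branch on the columns' data types:

```python
import altair as alt
import pandas as pd
from pandas.api.types import is_numeric_dtype

def relationship_chart(df: pd.DataFrame, col_a: str, col_b: str) -> alt.Chart:
    """Auto-generate a chart relating two user-selected columns, so users
    never have to write plotting code. (Dispatch rules are illustrative.)"""
    a_num, b_num = is_numeric_dtype(df[col_a]), is_numeric_dtype(df[col_b])
    base = alt.Chart(df)
    if a_num and b_num:
        # Two numeric columns: scatterplot.
        return base.mark_point().encode(x=f"{col_a}:Q", y=f"{col_b}:Q")
    if a_num or b_num:
        # One numeric, one categorical: distribution per category.
        num, cat = (col_a, col_b) if a_num else (col_b, col_a)
        return base.mark_boxplot().encode(x=f"{cat}:N", y=f"{num}:Q")
    # Two categorical columns: co-occurrence heatmap.
    return base.mark_rect().encode(
        x=f"{col_a}:N", y=f"{col_b}:N", color="count():Q"
    )
```

The point of a dispatcher like this is that the user only ever selects two column names; the system owns the plotting decisions.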

User Research III : Usability Tests

Evaluating for control, trust & cognitive load.

Having iterated on the prototype primarily from a usability point of view, I went back to testing with users to gauge metrics for control, trust & comfort. To test these metrics, I designed tasks borrowed from the journey maps I had created in the previous round of research.

All users were comfortable with the overview visualization.

Trust scores in automated insights increased by 31%.

Design III : Control & Trust

Designing the system to be familiar & trustworthy.

Final Prototype

After 4 iterations and feedback from 22 users...

An information architecture that allows users to explore data at different granularities using relevant automated insights for each level.

Overview visualizations that give users potential entry points into large, complex datasets, and interactions to help users narrow down on interesting columns:

Transparent and explainable automated insights, to help users understand system actions under the hood:

3 stages of verification to boost user trust in automation:

Reflection

What I learned from this experience...

Of the various stages in an ML pipeline, accelerating exploratory data analysis continues to be a challenging problem space, largely because of the numerous ways in which a data scientist can approach a problem and conduct their analysis. Many modern tools have tried to automate this analysis and, in the process, left humans out of the loop.

At the end of this project, I propose a human-driven framework that presents automated insights dynamically as the user drills down into a dataset. The structured organization of the system lets users ease into a dataset without being overwhelmed by its large number of columns. By designing overview visualizations that give users quick ideas about the dataset, the proposed framework gives direction to their exploration and provides various entry points into the data.

There remains more work to be done before modern automation-based tools can be introduced into the workflows of experts. Throughout my research, I learned about the skepticism users feel when relying on results from processes they did not design themselves. Simply being transparent about the methods used was not enough to gain users' trust. As evidenced by the 3-stage verification flow I designed, systems must accommodate various ways for users to further investigate the accuracy of insights.
