Bringing e-discovery to all by simplifying the user experience
Before you ever write a line of code, you have to decide what values your product will implement. Zapproved has a strong tradition of creating easy-to-use, affordable and comprehensive software for corporate legal teams. In developing Digital Discovery Pro, we strive to truly transform in-house e-discovery by representing the user alongside these core values.
The goal is to design an effortless, pleasurable e-discovery review experience that fits the unique needs of corporate legal teams. We want our users to get the feeling of “flow” that is the hallmark of true productivity and rock-solid usability.
I came to legal software from the technology side rather than the legal side. My previous employers manufactured sleek, black, cube-shaped computers that led to the iPhone. We created the first offline web browser for mobile devices (previewing modern web applications) and made rack-mounted systems management appliances for a variety of organizations, from a Florida school district to a U.S. government space lab.
There was not a single document review application in the bunch, but a consistent theme: take a complex thing — like Unix, mobile app development or systems administration — and make it accessible to the everyday user. This lowers the training costs for entry-level users and makes experienced users more productive at the same time. Zapproved is inspired by those examples and others, like Google’s pioneering use of a single “search” field and autocomplete, to bring e-discovery to all.
However, in a task domain as precise, time-critical and demanding as e-discovery, how do we know we’re doing the right thing before we do it? There are three answers to that question: user testing, user testing and more user testing.
At Zapproved, before we write any code, our staff of user experience experts, led by Jennifer Lyall-Wilson, Ph.D., works with product management to apply a data-driven, scientific method to discover how users perform common tasks. There are many ways we gather this data.
First, we’ll ask users to perform common actions, like a culling search, on the current iteration of the product using public datasets. This allows us to see which parts of our user experience provide quickly learnable and efficient ways to accomplish a task.
Next, we’ll survey and interview users on how they perform e-discovery tasks, which is an important part of checking for blind spots. One of our most interesting discoveries here has been that many of our users regard our “facets” functionality as a kind of “autocomplete,” which is leading us down some interesting design paths for simplifying searches.
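To make the facets-as-autocomplete idea concrete, here is a minimal sketch of what treating a facet list as a suggestion source might look like. The function name and the custodian values are purely illustrative, not part of Digital Discovery Pro:

```python
def facet_suggestions(prefix: str, facet_values: list[str]) -> list[str]:
    """Suggest facet values that match what the user has typed so far,
    the way an autocomplete field would."""
    p = prefix.lower()
    return sorted(v for v in facet_values if v.lower().startswith(p))

# Example: suggesting custodian names as the user types into a search box.
custodians = ["Alice Adams", "Albert Ng", "Bob Lee"]
print(facet_suggestions("al", custodians))  # → ['Albert Ng', 'Alice Adams']
```

The point of the sketch is the design direction, not the implementation: if users already think of facets as suggestions, surfacing them inline as the user types can simplify search.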
Presenting new designs to our users before we implement them is also an important part of our process. We gather specific data on learnability and efficiency before we write code. We’re currently working on designs for conversation threading, and have been applying insights from customer surveys, task tests and feedback to build future iterations of the user experience.
Finally, we use customer-specific feature flags. A feature flag is a way of hiding a feature from everyone except the customers who have been granted access. We will often complete a feature and enable it only for particular customers for a couple of our two-week sprints. This lets us run another set of usability tests and answer the key questions: Is it quick? Learnable? Efficient? Does it create that sense of flow that makes using a product pleasurable? (That last one is admittedly qualitative, but no less important.)
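The feature-flag mechanism described above can be sketched in a few lines. The flag name, customer IDs and allowlist structure below are hypothetical stand-ins, not Zapproved’s actual implementation:

```python
# A customer-specific feature flag: the feature is hidden from everyone
# except the customers explicitly added to its allowlist.
FLAG_ALLOWLIST: dict[str, set[str]] = {
    "conversation-threading": {"customer-123", "customer-456"},
}

def is_enabled(feature: str, customer_id: str) -> bool:
    """Return True only if this customer is on the feature's allowlist."""
    return customer_id in FLAG_ALLOWLIST.get(feature, set())

# Enabled for a pilot customer, hidden for everyone else:
print(is_enabled("conversation-threading", "customer-123"))  # → True
print(is_enabled("conversation-threading", "customer-999"))  # → False
```

Because the check is keyed by customer, the same build can serve pilot customers the new feature while everyone else sees the current experience, which is what makes sprint-by-sprint usability testing possible.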
If you are interested in helping us make e-discovery work for everyone in an efficient and pleasant way, contact me and we’ll sign you up for our user testing sessions.