OVERVIEW

The team at GitLab recently introduced GitLab Duo, a complete suite of AI capabilities to power DevSecOps workflows. GitLab Duo's AI features enable our users to write secure code faster and to boost productivity with helpful explanations and insights into their code. One example is harnessing AI to help prevent security breaches. The Explain this Vulnerability feature leverages an LLM powered by Google AI to assist in securing applications by:

  • Summarizing detected vulnerabilities

  • Helping developers and security analysts understand the vulnerability and its implications

  • Showing how a vulnerability can be exploited with detailed example code

  • Providing in-depth solutions to the vulnerability

  • Providing suggested mitigation along with sample code tuned toward your project's programming language (a rough sketch of how a prompt for this might be assembled follows this list)
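
To give a sense of what sits behind those capabilities, here is a minimal, hypothetical sketch of how a prompt for this kind of explanation might be assembled. The function, field names, and prompt wording are illustrative assumptions for this writeup, not GitLab's actual implementation.

    # Hypothetical sketch only: field names and wording are assumptions,
    # not GitLab's actual prompt or data model.
    def build_explanation_prompt(vulnerability: dict, include_source: bool = True) -> str:
        parts = [
            "You are a security expert helping a developer understand a vulnerability.",
            f"Vulnerability: {vulnerability['name']} ({vulnerability['identifier']})",
            f"Severity: {vulnerability['severity']}",
            f"Project language: {vulnerability['language']}",
            "Explain the vulnerability, how it could be exploited, and how to fix it,",
            f"including example remediation code in {vulnerability['language']}.",
        ]
        if include_source and vulnerability.get("source_snippet"):
            # The Beta design lets users choose whether to send the affected code.
            parts.append("Affected code:\n" + vulnerability["source_snippet"])
        return "\n".join(parts)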

ABOUT THE PROJECT

Quite literally overnight, an executive decision was made that our product needed to start incorporating AI functionality; not in a few months, but ASAP. The existing UX roadmap I had created with my PM would have to be balanced with iterative releases of this AI feature at increasing levels of maturity: first Experiment, then Beta, and finally GA.

While I was learning what capabilities this AI provided and how we could translate them into a UI, I was simultaneously syncing with eight other designers who were rushing to release AI features for their own categories. Our design system didn’t yet have any AI-specific patterns or components, so the designers and I met every week and, in GitLab fashion, collaborated asynchronously every day in between to make sure we introduced these features quickly without sacrificing product consistency or quality.

PROJECT HIGHLIGHTS

  • The Senior Product Manager I work very closely with awarded me a discretionary bonus, based on the values of collaboration, results, efficiency, and transparency, for the quick turnaround of the AI feature “Explain this vulnerability”.

  • We were able to deliver an MVC (Explain this vulnerability - Experiment) within one milestone, and Explain this vulnerability - Beta within another two. Our GA will be released after I complete another round of solution validation, this time with external customers (required for GA).

  • Of course, the evolution of LLMs brings many opportunities for us to expand our AI features, but only if we can also ensure that the quality of the responses and solutions stays above our success criteria of >85% accuracy and <5% incorrect or misleading responses (sketched below).
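
For illustration, here is a minimal sketch of checking a set of labeled response evaluations against those thresholds. The label names and data shape are assumptions for this example; only the thresholds come from the criteria above.

    # Minimal sketch: check evaluated responses against our success criteria
    # (>85% accurate, <5% incorrect or misleading). Label names are assumptions.
    def meets_success_criteria(evaluations: list[str]) -> bool:
        total = len(evaluations)
        if total == 0:
            return False
        accurate = sum(1 for label in evaluations if label == "accurate")
        misleading = sum(1 for label in evaluations if label in ("incorrect", "misleading"))
        return accurate / total > 0.85 and misleading / total < 0.05

    # Example: 90% accurate, 4% misleading, 6% partially helpful -> passes
    print(meets_success_criteria(["accurate"] * 90 + ["misleading"] * 4 + ["partial"] * 6))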

WHO

Senior Product Manager, the Threat Insights engineering team (backend/frontend/fullstack), internal vulnerability research team (for prompt testing/QA), Senior Technical Writer, other designers working simultaneously on other AI features for GitLab (for feedback and collaboration), and myself (Senior Product Designer for Threat Insights, responsible for research and designs for the “Explain this vulnerability” feature).

WHEN

March 2023 - ongoing

THE PROCESS


ASSETS

1. Design: Explain this vulnerability (Experiment)

After a group-level admin manually switches on the AI and ML toggles under “Settings”, a blue info alert announces the AI feature on any SAST vulnerability. I included a link to a feedback issue in the alert so we could collect early feedback (positive or negative) about the feature, and used a drawer component to display the AI results. A drawer, as opposed to a modal, lets users continue to view information about the vulnerability on the left of the page for cross-referencing. I’m currently researching whether it would be valuable to let users collapse or resize the drawer, in case it hides any critical vulnerability info behind it. Note: this is a test project and a test vulnerability, in order to keep GitLab data secure.

 

2. Heuristic Evaluation (required for Beta)

Using our own GitLab heuristics (largely inspired by Nielsen/Norman), I evaluated the “Explain this vulnerability - Beta” designs before they were implemented in production. A passing grade of “C” was required in order to mature to Beta, and the original score came out to be a “C”. After reviewing with my team and urging my PM and Technical Writer to, at the very least, make some improvements to our documentation, we were able to raise the average score to a “B” before its Beta release.
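
For context on how a “C” average can move to a “B”, here is a rough sketch of averaging per-heuristic letter grades into an overall grade. The point values and cutoffs are my own simplification for this writeup, not the exact GitLab rubric.

    # Rough sketch: average per-heuristic letter grades into an overall grade.
    # Point values and cutoffs are a simplification, not the exact GitLab rubric.
    GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

    def overall_grade(heuristic_grades: list[str]) -> str:
        avg = sum(GRADE_POINTS[g] for g in heuristic_grades) / len(heuristic_grades)
        for letter, points in GRADE_POINTS.items():
            if avg >= points - 0.5:  # round to the nearest letter grade
                return letter
        return "F"

    print(overall_grade(["B", "C", "C", "B", "C"]))  # "C" before improvements
    print(overall_grade(["B", "B", "B", "B", "C"]))  # "B" after improvements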

 

3. Explain this vulnerability (Beta) - designs

A few differences to note from the “Experiment” to the “Beta” design:

  1. A pre-flight check (security scan) looks for hard-coded passwords and warns the user against sending that sensitive information to the AI. The check can be overly cautious and report false positives, so we allow the user to review the code and proceed if it’s clear of passwords (a rough sketch of such a check follows this list).

  2. The user can now preview the prompt we’re sending to the AI and decide whether or not to include the source code the vulnerability was detected in. If they aren’t comfortable sending the code, they can remove it, and a more general explanation of that vulnerability type will be generated.

  3. The AI designers and I decided to associate our AI features with purple, which is used in our marketing design library but not our product design library, to establish a cognitive shortcut and relationship across all AI features in the product. Hence, the “Explain vulnerability” button is now purple and features our new GitLab AI icon (courtesy of a designer on the Foundations team).

  4. A feedback collection mechanism now appears at the bottom of the AI results in the drawer, to collect feedback and ensure a minimum standard of quality.
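
As referenced in item 1, here is a minimal sketch of what a regex-based pre-flight check for hard-coded passwords could look like. The patterns, function name, and warn-but-allow behavior are illustrative assumptions; the real scan is more involved, which is also why it can report false positives.

    import re

    # Minimal sketch of a pre-flight check for hard-coded credentials before code
    # is sent to the AI. Patterns are illustrative assumptions, not the real scan.
    SECRET_PATTERNS = [
        re.compile(r"(?i)(password|passwd|pwd)\s*[:=]\s*['\"][^'\"]+['\"]"),
        re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]+['\"]"),
    ]

    def find_possible_secrets(source: str) -> list[str]:
        """Return lines that look like they contain hard-coded credentials."""
        return [
            line.strip()
            for line in source.splitlines()
            if any(pattern.search(line) for pattern in SECRET_PATTERNS)
        ]

    snippet = 'db_password = "hunter2"\nquery = build_query(user_input)'
    hits = find_possible_secrets(snippet)
    if hits:
        # Warn the user, but let them review and proceed if it's a false positive.
        print("Review before sending to the AI:", hits)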


4. User research insights (problem & solution validation)

I recorded a walkthrough video of my research insights and included a couple of highlight reels of customers talking, because there’s something compelling about hearing it directly from the user’s mouth, and it helps create empathy for our end users. However, because of participant confidentiality, that video must stay internal to GitLab team members, so I’ve created a written report that can be shared publicly (and hides the identities of all participants).



5. Explain this vulnerability (GA) - wireframes (work in progress)

What I’ve learned so far

  • I’ve been working off UX Roadmaps for the past 2 years and have, for the most part, been aware of which projects are in the pipeline, their scope, and what milestone they need to be completed by. This project, however, came out of virtually nowhere, and I’m proud of how the team and I pivoted and came together to accomplish a lot in a short amount of time, and of how quickly we keep learning about the many opportunities that AI presents. While the project is still ongoing, it has shown that I can be resourceful while working under tight deadlines.

  • AI will constantly be evolving, so it’s important to stay on top of the latest developments, and never really call this feature “complete”. Similarly, we have to continuously monitor the quality of the AI results we’re getting and how we can continue to improve the results by evaluating different models, testing prompts, and keeping the standard of UX extremely high through consistent qual and quant testing.

Next steps

  • Conduct a UX Scorecard (a process involving solution validation of our AI feature).

  • Continue to iterate on designs, and sync with engineering, product, and the other UX designers working on AI.

  • Continue learning about AI, its developments, and its implications across technology, especially regarding AppSec.