Mixed Methods Senior UX Researcher
ProQuotes: Closing the Conversion Gap for Home Depot's Pro Members
The Problem
Pros (contractors, remodelers, facility managers, and specialty trades) are not incidental customers at Home Depot. They represent nearly half of the company's $159.5 billion in 2024 sales and more than 15 million members in the ProXtra loyalty program. When analytics showed that a significant volume of Pro quotes were being created but not converting to purchases, understanding why was not a product polish question. It was a business-critical one.
The design team had already responded with a redesigned quoting interface and needed it to be validated before a broader rollout. The product team had a different question: what were competitors doing in the B2B product quoting space, and what features belonged on the roadmap? Both teams came to the kickoff meeting with one research request. I left with two study plans, but not without a negotiation first.
Both teams agreed the scope needed to split. The harder conversation was sequencing. Design wanted their prototype validated first so there was time for a second round of testing if the designs needed significant changes before the quarterly planning meeting. Product wanted competitive intelligence first to frame the design feedback in a competitive context. Both priorities were legitimate.
I brokered the sequencing by mapping the dependency structure. The design validation carried the harder deadline constraint: if prototype testing revealed problems that required a redesign, the team needed maximum runway before the quarterly planning meeting to act on them. The competitive study had no equivalent downstream dependency; its outputs fed the roadmap, not the prototype. Running usability testing first left time to act on any design problems it revealed. Running the competitive walkthroughs first would push usability testing later and close the window to iterate before the planning meeting. The risk ran in only one direction, and both teams agreed once I framed it that way.
The case for two separate studies was straightforward once framed correctly. Design needed to understand whether Pros could complete tasks and why they struggled. Product needed to understand what features Pros valued in competitor systems and why. These were different questions, different participant tasks, and different analytical frameworks. Combining them into one study would have forced participants to split their attention between evaluating a prototype and narrating a competitor workflow, compromising the validity of both. Separating them meant each team got findings that were unambiguous about what they were measuring.
My Role:
Research Lead and sole researcher. I owned study design, recruitment, moderation, synthesis, and findings delivery for both studies. I ran collaborative prioritization workshops with design and product after each phase to translate findings into roadmap decisions.
Phase 1:
Usability Testing
Why this method: Abandonment analytics could show that a significant volume of quotes never converted to purchases, but not whether the redesign solved the friction or simply moved it. Moderated testing with a think-aloud protocol let me observe decision-making in real time and probe the reasoning behind actions the prototype alone couldn't explain.
I recruited eight participants, two from each of four Pro segments: renovation remodelers, handypeople, specialty trades, and facility managers. Within each segment pair, one participant tested on desktop and one on mobile. I recruited through our own channels after account management stakeholders, concerned about protecting strategic client relationships, blocked access to high-value Pro accounts. That constraint shaped the participant pool and is worth noting as a study limitation.
The lead designer attended every session and took notes. I ran daily debriefs after each session block. Product attended some debriefs but no sessions. That difference in proximity to the data shaped how each team received the findings.
Phase 2:
Competitive Cognitive Walkthroughs
Why this method: The product team needed to understand what competitors offered so they could evaluate roadmap priorities, but a standard competitive audit would not have told us what Pros actually experienced when using those systems. I recruited the same Pro segment types and asked them to walk through quoting a real job in Lowe's, Grainger, and Ferguson, with Home Depot included as a control condition.
This method was more resource-intensive than a desk-based competitive audit, but it surfaced something a desk audit couldn't: how real workflow interruptions, familiarity gaps, and cognitive load show up when Pros are actually in a competitor interface. The participants narrated their expectations, their frustrations, and the moments where one system clearly outperformed another.
The recruiting obstacle from Phase 1 reappeared here. Additionally, some participants who had passed screening declined to share their actual company account data during the walkthrough, even though the screener had flagged that requirement. I replaced those participants mid-study, which added roughly four days and compressed the synthesis timeline.

What I Found
Finding 1:
The 7-day expiration window works against how Pros actually sell
Across multiple participant segments, the quote expiration policy surfaced as the single most consistent source of friction. Pro sales cycles routinely run 30 to 90 days. Customers take weeks to respond, management approval can add delays, and deposits or contract signatures often come long after a quote is created. The system's 7-day window meant Pros were recreating quotes multiple times before a job was confirmed, adding labor, introducing pricing inconsistencies, and eroding trust in the system as a professional tool.
The change this required was not a design decision; it was a business policy decision. Surfacing it as a research finding, framed in terms of customer labor cost and job completion rates, is what moved it from a complaint to a roadmap item.
"Seven days is nothing. My customers take weeks just to call me back. By the time they say yes, I'm starting from scratch all over again."
- Handyperson participant
"I don't understand why they disappear so fast. I should be able to keep a quote until I decide I don't need it anymore, not until Home Depot decides for me."
- Renovation remodeler participant
Finding 2:
Sharing a quote should be table stakes, not a feature request
The competitive cognitive walkthroughs made this finding unavoidable. When participants finished creating a quote in Grainger or Ferguson, their first instinct was to share it: with a customer, a foreman, or a supervisor. In Grainger, emailing a quote from within the system was standard. In Ferguson, an automatic notification email was built into the workflow. Home Depot had neither.
Participants using the Home Depot control condition described workarounds that would be familiar to anyone who has studied B2B tool friction: exporting to QuickBooks, screenshotting quotes before expiration, emailing themselves links to share manually. These were not edge cases. They were load-bearing workarounds that Pros had built into their process because the tool required it.
The competitive walkthrough surfaced sharing not as a differentiating feature but as a basic expectation Pros had from any quoting system they used. That framing (table stakes, not enhancement) changed how the product team prioritized it.
"I just forward the quote link to my customer and they can see it. I figured Home Depot would do the same thing."
- Grainger walkthrough participant
"I had to screenshot it and text it to my foreman. There's no way to just send it to someone."
- Home Depot control participant
Finding 3:
"Refresh" is not a shared mental model
The refresh button created genuine confusion. Participants held three distinct interpretations: updating the price on an existing quote, creating a reusable copy for a different job, and generating a new quote entirely. The alternatives they proposed ("Reactivate," "Repurchase," "Duplicate") mapped to those different mental models and suggested that the single label was doing too much semantic work.
Task success was near-universal: only one of the eight participants failed to complete it. That pattern suggested the problem lived in the language, not the interaction. Fixing it required only a label change and a supporting description, a low-effort, high-clarity improvement the team made quickly.
"I wasn't sure if refresh meant it was going to update the prices or just make a copy for a new job. I would have called it something like 'reactivate' or 'repurchase.'"
- Roofer participant
Finding 4:
Fulfillment visibility is a competitive differentiation opportunity
Across the competitor walkthroughs, one pattern appeared in every system but was handled better in some than others: Pros wanted to know whether items were in stock and when they would arrive before committing to a quote, not after. In Lowe's, delivery and pickup items were split into separate visual sections within the quote, making fulfillment mode immediately readable. In Grainger, lead time appeared at the item level. In every system including Home Depot, participants described the same frustration: discovering availability or timing issues late in the process disrupted project schedules and eroded confidence in the quoting tool.
No system fully solved the problem, including Home Depot's, which made fulfillment visibility an opportunity to lead rather than simply close a competitive deficit.
"I need to know if something is going to take three weeks before I put it on the quote, not after. By then I've already promised my customer a timeline."
- Builder participant
"Sometimes I add something and then find out it's out of stock. At that point I have to go find an alternative and it slows everything down."
- Renovation remodeler participant
What Changed
After each study, I presented findings to design and product in a structured readout, then facilitated a Now/Next/Later/Deprioritize workshop to get alignment on implementation sequencing. The workshops were not optional. They served two functions: building shared ownership of the priorities and surfacing technical feasibility constraints before they became post-design surprises.
Within the near-term roadmap, the share feature was implemented as quote-sharing by email. Sharing by text was deprioritized during the workshop when engineering raised implementation complexity, a constraint that would not have surfaced without the structured prioritization session. The refresh button received an updated label and a short supporting description that resolved the three-way mental model conflict. Quote expiration messaging was redesigned to make the deadline visible and actionable earlier in the workflow.
Larger changes (fulfillment visibility at the item level, more robust quote reuse, and expanded shareability options) moved into the roadmap for future quarters.
Six weeks post-launch, CSAT moved from 63 to 74. SUS moved from 65 to 78.
What I Learned
Two things I would change about the study design.
First, I would increase the participant count in the cognitive walkthroughs from two per system to four: two on desktop and two on mobile per system. As run, each system got one desktop user and one mobile user. For a tool used heavily on mobile by field-based Pros, that coverage was too thin to draw reliable conclusions about device-specific friction. The findings were directionally sound, but any mobile-specific observation rested on a single session per system.
Second, I would push harder at kickoff to confirm account access before finalizing the recruitment timeline. The blocked access to high-value accounts and the mid-study participant replacements in the cognitive walkthroughs cost time and introduced sampling risk. Both could have been surfaced and negotiated earlier if I had explicitly mapped the access dependencies before finalizing the study plan with stakeholders.
The scoping decision (separating one ambiguous brief into two defined studies) held up well. Neither team felt their question was crowded out by the other's. That clarity upstream made the prioritization workshops downstream straightforward.