Shifting Left: Research That Moved 27% of Help Desk Calls to Self-Service Across a Global Workforce

Bristol-Myers Squibb | UX Center of Excellence | Lead Researcher and Designer

Before this portal existed, every HR policy question, IT equipment request, and software access case at Bristol-Myers Squibb started the same way: an employee called a help desk or asked a coworker. For a global pharmaceutical company with 32,000 employees across three continents, that dependency was both a cost problem and a productivity drain. Six months after launch, employees were resolving on their own 27% of the issues they had previously relied on live support for. Post-launch CSAT rose from 37% to 86%.


Getting there required a research program that started before a single pixel was designed and continued through post-launch measurement: a 15-stakeholder co-design workshop, international moderated usability testing, large-scale card sorting and tree testing, and a post-launch satisfaction survey spanning the Americas, Europe, and Asia. I led all research on this project and served as the sole designer after the first month.

The Business Problem

BMS's HR and IT support model was fully reactive. Employees called or emailed help desks to get answers to routine questions, open IT or HR cases, or request equipment for new hires. Those who couldn't reach support asked colleagues, spreading inconsistent information across teams. The volume was high, handling costs were significant, and employees in non-US geographies often waited for answers that should have been immediately available.

The product team's goal was to shift left: move a meaningful share of that call volume to a self-service portal where employees could look up policy and how-to information, open and track HR or IT cases, and request equipment without live support involvement. The decision to build had already been made. The research question was what to build, how to organize it, and whether employees across a global workforce with different roles, geographies, and technical comfort levels could actually use it without guidance.

Without research, the team would have built what the IT and HR departments thought employees needed, organized it the way those departments organized their own work, and validated it with internal stakeholders who already knew the system. The result would have been a portal that served the org chart instead of the employee.

My Role

I joined this project as the lead researcher and UX designer embedded in BMS's User Experience Center of Excellence. I scoped the research program, recruited participants across three continents, facilitated the co-design workshop, designed and iterated the Figma prototypes used in testing, ran all moderated sessions, analyzed all qualitative data, and designed the post-launch satisfaction survey. A second designer collaborated on prototyping for the first month before leaving the company; from that point forward I held both the research and design responsibilities for the full program.

Research Approach

I structured the research in phases because each method was designed to answer a different question, and what one phase revealed changed what the next phase needed to investigate.

Phase 1: Co-Design Workshop

I started with a full-day co-design workshop before any prototypes existed. I brought 15 stakeholders from Sales, R&D, IT, and HR into the room together, not for buy-in, but because these four groups had different definitions of what self-service meant, different assumptions about who owned what content, and competing mental models of how employees thought about getting help. Interviewing them separately would have produced four competing visions with no mechanism for resolving the conflicts between them.

One tension surfaced immediately. "New hire setup" existed as a concept in both IT and HR: IT owned hardware provisioning, HR owned onboarding paperwork and benefits enrollment, and both departments assumed their piece was the primary use case. The workshop forced a decision: organize around the employee lifecycle event rather than around which department owned each task. "Starting a new employee" became a single entry point in scope for the first release, rather than two separate department-owned sections that managers would have to navigate independently.

The workshop also produced a principled scope reduction. HR nominated payroll and compensation questions as a top call-volume driver, but the group agreed to defer them: answers were employee-specific and couldn't be self-served without integrating with the HRIS, which was out of scope for the first release. Scoping that out in the workshop prevented the team from building a section that couldn't actually deliver on its promise.

Phase 2: Moderated Usability Testing

Once I had working Figma prototypes for the three core use cases (information lookup, case creation, and equipment ordering), I ran moderated usability testing with participants across the Americas, Europe, and Asia. I chose moderated remote sessions over unmoderated because the task types were complex enough that behavioral observation mattered more than completion rates. I needed to see where employees slowed down, where they formed wrong expectations, and where the portal's language conflicted with how employees in different regions described the same concepts.

I ran eight participants per use case area through UserTesting. The equipment ordering flow produced the most significant and consistent failure patterns.

Managers expected to select a role-based bundle, not configure individual components. The equipment catalog showed individual items but offered no kit or package option. Managers across R&D, Operations, and other functions tried to find a pre-configured setup appropriate for their new hire's role rather than selecting a laptop, monitor, keyboard, and mouse as separate line items. 

Managers could not find the entry point for the task they were trying to do. Participants scanned the homepage for an onboarding-related label and abandoned the task when they found only the equipment catalog. The portal's navigation matched the department's taxonomy, not the manager's mental model.

The ordering form gave no indication it was designed for proxy requests. The form was identical whether ordering for yourself or for someone else, with no upfront branching. Managers ordering on behalf of a new hire spent time trying to determine if they were in the right place before proceeding. Several backed out entirely.

Testing also revealed a regional trust problem that was not an absence of localized content but a failure to surface it. The portal had region-specific policy content, but nothing on the page indicated which region a given document applied to. European participants locating a benefits or leave policy had no way to determine whether they were reading the US version or their local statutory version. Several said they would not act on the information without calling to confirm. The fix was not new content but a labeling system that told employees which region each document applied to before they read it.

"I need the same stuff for every new person I hire. I don't want to pick out accessories one at a time. There should just be a kit for each role."

- Sales Manager

"I would look for something that says 'set up a new employee' or 'onboarding.' I wouldn't go straight to ordering hardware."

- R&D Manager

"I kept thinking this was for if I needed to order something for myself. I wasn't sure where to go to order something for someone else."

- Operations Manager

"This might be the American policy. I don't know if this is what applies to me in Germany." 

- Field Sales Employee

Phase 3: Card Sorting and Tree Testing

Usability testing confirmed where the interaction design needed work. It also surfaced the deeper problem: the portal's navigation was organized around how the departments thought about their own work, not around how employees thought about getting help. I ran two rounds of card sorting to understand employees' natural mental models before committing to a revised structure.

The first round was open card sorting with 20 participants in moderated in-person sessions in the US. I chose moderated open sorting because I wanted to hear participants name their own categories; the labels they invented told me more about their mental models than the sort results alone.

The second round was unmoderated closed card sorting on UserTesting with 100 participants across the full international audience, using the structure that emerged from the first round. This validated whether the employee-generated categories worked at scale and surfaced regional differences in how topics were grouped.
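Closed-sort data at this scale is typically summarized by placement agreement: for each card, the share of participants who put it in its most popular category. A minimal sketch of that calculation, using hypothetical card names and placements rather than the actual study data:

```python
from collections import Counter

def placement_agreement(placements):
    """placements: dict mapping card name -> list of category choices,
    one per participant. Returns card -> (modal category, % agreement)."""
    out = {}
    for card, cats in placements.items():
        (top, count), = Counter(cats).most_common(1)
        out[card] = (top, 100 * count / len(cats))
    return out

# Illustrative placements, not the real study data:
placements = {
    "Order a laptop": ["Set Up a New Employee"] * 4 + ["Request Something"],
    "Parental leave policy": ["Look Up Information"] * 5,
}
agreement = placement_agreement(placements)
# agreement["Order a laptop"] → ("Set Up a New Employee", 80.0)
```

High-agreement cards anchor a category; low-agreement cards flag labels that need rewording or content that belongs in more than one place.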

I followed both rounds with two weeks of unmoderated tree testing with 200 participants across the full international audience. Rather than asking employees to sort content, tree testing asked them to find it: navigating the revised category structure without any visual design to guide them. First-click accuracy and task completion rates on the revised structure were measurably higher than what the original department-owned navigation would have produced, confirming that the employee-generated categories were not just intuitive to the people who created them in the card sort, but navigable by a broader global population who had never seen the structure before. It was the validation step that made the IA recommendation defensible to stakeholders who might otherwise have preserved the existing structure out of familiarity.
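The two tree-test metrics above can be scored from a simple per-task log of where each participant clicked first and where they ended up. A minimal sketch, with hypothetical field names and results rather than the actual study data:

```python
from dataclasses import dataclass

@dataclass
class TreeTestResult:
    task_id: str
    first_click: str      # top-level category the participant clicked first
    final_node: str       # node where the participant ended the task
    correct_first: str    # correct top-level category for the task
    correct_node: str     # correct destination node

def score(results):
    """Return (first-click accuracy %, task completion rate %)."""
    n = len(results)
    first_click_hits = sum(r.first_click == r.correct_first for r in results)
    completions = sum(r.final_node == r.correct_node for r in results)
    return 100 * first_click_hits / n, 100 * completions / n

# Illustrative results for one task:
results = [
    TreeTestResult("t1", "Request Something", "New hire kit",
                   "Request Something", "New hire kit"),
    TreeTestResult("t1", "Look Up Information", "Leave policy",
                   "Request Something", "New hire kit"),
]
fca, completion = score(results)  # → 50.0, 50.0
```

Comparing these two numbers per task is also diagnostic: high first-click accuracy with low completion points to problems deeper in the tree, not at the top level.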

Key Findings

Three findings from this research program shaped the portal's design more than any other.

Employees organize by what they need to do, not by which department owns it. The card sort produced an employee-generated structure that bore almost no resemblance to the original department-owned navigation. The two top-level sections, "HR Services" and "IT Services," collapsed entirely. Employees didn't think in department terms at the top level; they thought in terms of what they were trying to accomplish. "Request Something," "Get Help With Something," and "Look Up Information" replaced the department headers as the primary navigation layer. The single strongest employee-generated category was "Set Up a New Employee," which grouped laptop ordering, system access requests, badge activation, and benefits enrollment together under an onboarding umbrella. IT owned the hardware items and HR owned the enrollment items; neither department had grouped them together in the original structure. The card sort data, validated at scale across 100 international participants, made that reorganization undeniable.

The equipment ordering flow was designed for the wrong user. The original flow assumed the person ordering equipment was the person receiving it. Managers requesting hardware for incoming employees were proxy requestors acting on behalf of someone who had no system access yet. The form provided no indication it was intended for that scenario, no role-based bundle options that reflected how managers actually thought about provisioning a new hire, and no onboarding-oriented entry point on the homepage. All three failures compounded each other: managers couldn't find the task, couldn't configure it efficiently when they did, and weren't confident they were in the right place throughout. The redesigned flow surfaced "ordering for myself" versus "setting up a new employee" as the first decision in the process, moved equipment ordering under the new hire setup category, and added role-based bundle options as the default selection mechanism.

Employees search by life event; the portal was organized by policy category. When asked to find the parental leave policy in testing, participants typed "having a baby" or "maternity" into the search bar. The portal's content was filed under "Leave of Absence" and "Benefits Administration," terms employees didn't use and didn't recognize as the right destination. "I typed in 'parental leave' and got nothing. I wouldn't have thought to look under 'Leave of Absence' on my own." The card sort confirmed this pattern at scale: participants consistently sorted policy documentation and procedural how-to content into the same category regardless of which department owned them, grouping everything under "answers" or "information." The portal had inherited HR's and IT's internal content taxonomy. Employees had their own, and it was organized around the moments in their work life when they needed help.

Impact

The portal launched to BMS's global employee population across three continents. In the weeks following launch I fielded a post-launch survey to 127 respondents to measure satisfaction and perceived usability.

At six months post-launch, BMS's help desk recorded a 27% reduction in call volume attributable to the categories the portal served. Post-launch CSAT rose from 37% to 86%, a 49-point lift that reflected the difference between a reactive, phone-dependent support experience and a self-service one employees could trust. The UMUX Lite score of 85 placed perceived usability well above typical benchmarks for enterprise internal tools.
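For readers unfamiliar with the metric: UMUX Lite is computed from two 7-point items ("the system's capabilities meet my requirements" and "the system is easy to use"), rescaled to 0-100. A minimal sketch of the standard calculation, with illustrative responses rather than the actual survey data:

```python
def umux_lite(responses):
    """responses: list of (requirements, ease_of_use) pairs, each rated 1-7.
    Returns the mean UMUX Lite score on a 0-100 scale."""
    scores = [((req - 1) + (ease - 1)) / 12 * 100 for req, ease in responses]
    return sum(scores) / len(scores)

# Two illustrative respondents: (7, 6) → 91.7, (6, 6) → 83.3
print(round(umux_lite([(7, 6), (6, 6)]), 1))  # → 87.5
```

A mean of 85 on this scale means the typical respondent answered both items at or near the top of the 7-point range.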

The research directly influenced four product decisions: the reorganization of the information architecture away from department-owned categories and toward employee task goals; the redesigned equipment ordering flow with role-based bundles and onboarding-oriented entry points; the addition of regional content labels that made localized policy content visible and trustworthy to international employees; and the deferral of payroll and compensation content pending HRIS integration, which prevented a first release that would have overpromised and underdelivered.

Reflection

Two things I would do differently on a project of this scope.

First, I would run card sorting before prototype development, not after. In this program, usability testing revealed an IA problem that the card sort then had to solve, which meant the prototype that went into testing did not reflect the final navigation structure. Running the card sort earlier would have produced a more accurate prototype and a cleaner usability study with fewer confounds.

Second, holding the researcher and designer roles simultaneously on a program this large creates real tension. When you have designed something, it is harder to observe users struggling with it neutrally. I managed this by having a stakeholder present in each session and by writing tasks that prevented me from offering guidance, but on a project of this scale I would advocate for dedicated research and design separation if resources allowed.

The IA work is what I am most proud of on this project. The combination of moderated open sorting with 20 participants and unmoderated closed sorting with 100 produced a navigation structure grounded in how employees across three continents actually thought about getting help, not in how the organization found it convenient to deliver it. The 100-participant validation round made the recommendation undeniable to stakeholders who would otherwise have deferred to the existing department structure. That is the kind of finding that only scale can produce.
