
eSignature Manager: Embedded Research Across the Full Product Lifecycle

U.S. Bank | Document Management Platform 

Role: Lead and sole embedded UX Researcher

Methods: Discovery Interviews, Concept Testing, Prototype Usability Testing, Branch Field Observations, Live Site Usability Testing, Post-Launch Survey with UMUX-Lite

Business Context

When U.S. Bank retired its physical signature pads, it removed the device every branch banker used to capture legally required client signatures during account openings and maintenance transactions. There was no operational fallback. If the digital replacement failed, bankers could not open new accounts. Existing customers could not add beneficiaries, designate payable-on-death recipients, or update their address or phone number. The product replacing those pads was eSignature Manager, a component embedded within the bank's broader account management platform that launched automatically when a banker initiated a signing event from their primary system.

My Role

As the sole embedded researcher on the Document Management Platform, I was the single point of research contact for both the product and design teams. The six phases in this case study represent approximately six months of active work spread across two years. Between phases, I continued research for other teams and products on the platform, returning to eSignature Manager when design and development milestones created the next research need. Two findings shaped the product more than any others: one that bankers raised in every phase of testing and required a stakeholder reframe to resolve, and one that no amount of remote research would have caught.

Negotiating Researcher Access Before Research Could Begin

U.S. Bank's banking operations function controlled researcher access to branch staff and had no established pathway for allowing it. The default posture was protective: bankers were busy, and the concern was that research contact would become a recurring burden.

 

Before the first study launched, I negotiated a formal access protocol with operations leadership. All researcher contact with bankers would route through district managers. No district manager could be approached for additional studies within six months of a prior contact. Every request was logged in a centralized tracking system I maintained. The protocol added recruitment planning overhead to every phase of this program but produced infrastructure that outlasted it: ten researchers eventually used the model to run thirty research projects involving branch bankers, a research function that had not previously existed at the organization.
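The cooldown rule at the heart of that protocol reduces to a date comparison against the last logged contact. As a hypothetical sketch (the actual tracking system's implementation is not described in this case study, and all names here are illustrative), eligibility for a district manager could be checked like this:

```python
from datetime import date, timedelta

COOLDOWN = timedelta(days=182)  # roughly six months between research contacts

# Hypothetical contact log: district manager -> date of last research contact
contact_log = {
    "district-12": date(2024, 1, 15),
    "district-07": date(2024, 6, 1),
}

def eligible(district: str, today: date) -> bool:
    """A district manager may be approached only if they have never been
    contacted, or their last logged contact was at least six months ago."""
    last = contact_log.get(district)
    return last is None or today - last >= COOLDOWN

def log_contact(district: str, today: date) -> None:
    """Record a contact so future requests respect the cooldown."""
    contact_log[district] = today
```

Under these assumed dates, a request for district-12 in August 2024 would pass the check, while district-07 would still be inside the cooldown window.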

Discovery: Understanding the Problem Before Design Began

The physical signature pads U.S. Bank had relied on for years were being retired because the hardware manufacturer had ended support for the model. There was no operational data pointing to a problem with how bankers used them. The problem was that the hardware itself was going away and something had to replace it. Before any design work began, I needed to understand how the signing process actually worked in banker practice, what broke it, and what bankers were anxious about when imagining a digital replacement.

I chose moderated one-on-one interviews with eight bankers over a survey because the problem space was undefined. A survey would have measured a problem space I had not yet mapped. Participants spanned a mix of tenure levels across two business lines: consumer bankers, who handled single and joint account openings, and business bankers, who typically managed accounts with multiple signers, some of whom were not always present at the same time.

The Topaz signature pad U.S. Bank had used for branch signing ceremonies. Hardware manufacturer support ended, requiring a full digital replacement.

Research Questions:

Question 1

What types of signing ceremonies do consumer and business bankers conduct with clients, and how do those ceremonies differ across business lines?

Question 2

What is the end-to-end journey from the moment a banker launches the existing signature manager to the moment they exit back to the main system?

Question 3

What pain points do bankers experience with the current signature pad process, and where in the journey do they occur?

Findings

The journey followed a consistent structure: banker launches the signature manager, client reviews personal information and disclosures, client signs, the flow repeats for each additional signer, and the banker exits back to the main system. Three themes emerged from the interviews.

The signing moment carried outsized emotional weight. Bankers did not experience the signing step as one task among many. They experienced it as the most client-visible moment of the entire appointment, and they calibrated how the whole interaction had gone based on how smoothly it went. When something went wrong, it was not experienced as a system failure. It was experienced as the banker's competence failing in real time. Senior bankers, who had accumulated more experiences of visible failures in front of clients over longer careers, expressed more frustration with the existing system than newer bankers.

Bankers had limited visibility into what clients were experiencing on the pad. The signature pad displayed text and disclosure content to the client, but bankers could not see what was on the pad display from their side of the desk. They could not tell whether a client was confused, had missed a page, or was reviewing something unexpected without asking directly. Completion was signaled verbally by the client, not confirmed on the banker's screen. Bankers were managing a client interaction they could not observe. Two additional physical constraints compounded this: for clients with fine motor difficulty or who needed reading glasses, the stylus created friction with no established banker protocol for how to assist, and in some branch configurations the pad cable was short enough that it could not reach across the desk comfortably.

 

Partial progress was unrecoverable for individual signers. If a client needed to leave mid-signing, any signer who had only partially completed their portion had to start over on return. This was more frequent in business banking, where accounts with more signers increased the probability that one person might step away before completing their portion of the ceremony.

"When something goes wrong with the signature, the customer looks at me. Not the machine."

- Consumer Banker

Concept and Prototype Testing: Finding Problems Before Engineering Investment

Concept Testing

Concept testing with eight bankers evaluated three alternative design directions for the eSignature Manager dashboard before any approach was committed to design or development. The design team had three competing directions and needed evidence to choose one to build out fully before investing further.

Research Questions

Question 1

Which of three design alternatives best supports bankers in orienting themselves to the new dashboard without prior training?

Question 2

Does the overall signing flow match banker mental models of how a signing ceremony should progress from launch to exit?

Question 3

Which design alternative most clearly communicates signer status and session completion to bankers whose attention is split between the screen and the client?

Bankers completed a joint account opening task across all three directions, using a within-subjects design and a think-aloud protocol.

Direction A: Table layout with inline actions

All signers in a single list, with status and action buttons on each row. The banker can select any signer in any order.

Outcome: Selected direction. Bankers could orient without training and select signers in any order, matching real branch conditions.

Direction B: Grouped card layout (needs action / completed)

Signers split into two sections: unsigned signers shown first, completed signers collapsed into a quieter summary below.

Outcome: Tested best with bankers. Status separation reduced cognitive load and made it immediately clear who still needed to sign. I advocated for this direction; development constraints prevented it from moving forward.

Direction C: Step-by-step wizard, one signer at a time

The banker works through each signer sequentially, with one signer shown at a time and completed signers appearing in a summary below.

Outcome: Did not test well. The sequential model assumed all signers arrived together; in practice, signers often arrive at the branch at different times, so bankers need to be able to select any signer independently.

Direction C's wizard model assumed all signers would be present simultaneously and locked bankers into a sequential flow; because business account signers frequently arrive at different times, it was incompatible with how branch signing ceremonies actually run. Direction B produced the strongest results on the primary research criteria, with its grouped card layout making session status readable at a glance, and I recommended moving forward with it. The team selected Direction A because development constraints made the grouped card architecture unfeasible within the project timeline. Direction B was documented as the preferred design benchmark for a future iteration.

 

Direction A advanced into the prototype phase. Three research questions guided those sessions:

Figma Prototype Usability Test

Research Questions

Question 1

How do bankers initiate an in-person signing session, and what challenges surface during that process?

Question 2

How do bankers initiate a remote signing session for a signer who is not physically present, and what questions or hesitations emerge?

Question 3

How do bankers complete a multi-signer session and return to the originating system, and where does uncertainty appear?

Research Impact

Four findings drove revisions before any code was written.

The signature status indicator was not readable at a glance. A status that required careful reading was not usable when a client was present and the banker's attention was divided.

 

For multi-signer accounts, bankers hesitated at the Actions column when not all signers were physically present, uncertain whether initiating in-person signing for one signer would conflict with pending remote requests.

After completing all signatures and returning to the dashboard, there was no clear path back to the originating system. 

The "Review Details" button was ambiguous. Bankers avoided it throughout testing, unsure whether clicking it would interrupt the active session.

"I know we're done but I don't know how to get out of here. There's no obvious door."

- Consumer Banker

What Field Research Found That Remote Sessions Could Not

Four full-day branch observations surfaced two findings that no prior phase had reached. Both required action before launch. Each visit combined direct observation of live signing ceremonies with informal conversations with bankers during slower periods, giving me both behavioral evidence of how the flow executed in practice and self-reported perspective on challenges bankers had not raised in structured sessions.

Research Questions

Question 1

How do bankers and clients move through the eSignature Manager signing flow in a real branch environment, and where does friction surface that remote testing did not reveal?

Question 2

Does the physical branch environment support the interaction model as designed, and what environmental constraints affect how bankers execute the signing ceremony?

Question 3

How does the account opening signing ceremony feel to bankers and clients from launch to exit, and where does the experience break down?

Findings

The hardware infrastructure assumption could not be validated in the field. The design team had been told that teller line computers would be upgraded to support client-facing digital signing. When I observed teller stations across four branches, three had physical configurations that made screen rotation to a client impossible regardless of hardware. Space constraints at teller stations prevented monitors from being turned, and some branches had physical barriers between teller and customer that precluded direct touchscreen interaction. U.S. Bank's branch network reflects multiple eras of physical design from decades of acquisitions, and teller station configurations varied significantly across them.

Most account maintenance transactions originate at the teller line. Moving a client to a banker desk to complete digital signing added five to ten minutes to the transaction, required the banker to undock their computer or start a session on a different machine, and was not operationally feasible in smaller or understaffed branches. Paper signing on the teller line remained available but required manual data entry and document storage afterward.

My recommendation was a remote-first default for teller line maintenance transactions: route existing customers completing address, phone, or beneficiary updates through the Send flow built into the product, delivering documents by text or email for signing on the customer's own device. This used a capability already built into eSignature Manager and eliminated the need for physical screen interaction at the teller line. The team adopted this approach.

A pervasive data confirmation failure was eroding client trust. This finding had surfaced in every prior research phase. When a customer's information needed to be updated during a signing session (a phone number, an address, a name), the system locked that data at session launch. Changes made in the originating system did not carry over. Bankers had to delete the entire signing ceremony and start over.

The product team's position across each readout had been that this was an acceptable edge case. I observed the failure directly in a branch. An existing customer opening a joint account needed his phone number updated before signing. The banker made the change in the originating system, launched eSignature Manager, and the outdated number appeared on the disclosure documents the customer was asked to review and confirm. The banker explained that the system was showing old information but that the update had been entered elsewhere. The customer paused, looked at the screen, and asked whether the bank's systems were connected to each other.

That moment reframed how I reported the finding. This was not a workflow inconvenience. The gap surfaced during the most trust-sensitive moment in the client relationship and gave customers direct observational evidence that the bank's systems were not integrated. Bankers were being asked to verbally explain away a visible discrepancy on a document a client was about to sign. 

The reframe moved the decision. A full architectural fix was not feasible before launch. Instead, the team added a modal warning informing bankers that deleting and restarting the signing ceremony was required to reflect updated client information, along with in-context help explaining the system behavior. A smarter exit flow was also recommended: if a banker left eSignature Manager to correct client information, the system would detect the reason, automatically clear the signing ceremony, and surface a prompt explaining what it had done, rather than leaving the banker to discover the restart requirement mid-session.

"I have to tell the customer their information is right in the system, it just isn't showing here. You can see them wondering if we actually know what we're doing."

- Business Banker
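The smarter exit flow recommended above can be sketched as a small branch on the exit reason. The study describes the intended behavior, not an implementation, so the names and session structure here are hypothetical:

```python
from enum import Enum, auto

class ExitReason(Enum):
    COMPLETE = auto()             # all signers finished
    CORRECT_CLIENT_INFO = auto()  # banker leaving to fix data in the originating system
    OTHER = auto()

def handle_exit(session: dict, reason: ExitReason) -> str:
    """Hypothetical exit handler: when the banker leaves to correct client
    information, clear the in-progress ceremony and surface an explanation,
    instead of letting them discover the restart requirement mid-session."""
    if reason is ExitReason.CORRECT_CLIENT_INFO:
        # Client data is locked at session launch, so partial progress
        # cannot be reused after a correction; the ceremony must restart.
        session["signatures"] = []
        session["status"] = "cleared"
        return ("Signing session cleared: updated client information "
                "requires restarting the ceremony.")
    session["status"] = "closed"
    return "Session closed."
```

The key design choice is that the system, not the banker, carries the knowledge that corrections invalidate the locked session data.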

Live Site Testing: Catching What Implementation Introduced

Branch observations had confirmed two pre-launch risks. Live site testing addressed a third category: problems the build itself had introduced that did not exist in the prototype. I ran a remote moderated study with eight bankers in a lower environment, using the same core tasks from prototype testing with one addition: bankers were asked how they would correct a customer information error discovered mid-session, directly testing the modal warning added in response to the branch findings.

Research Questions

Question 1

Does the end-to-end signing flow work as intended in the coded experience, and what friction points surface that were not present in the prototype?

Question 2

Can bankers successfully identify and correct a customer information error before completing a signing session, and does the modal warning communicate the required steps clearly?

Question 3

How do the addition of the start e-signature button and the removal of the progress bar affect banker orientation and task completion compared to the prototype?

Findings that drove pre-launch action

The "start e-signature" button created a predictable point of confusion. The "Sign in person" and "Send" buttons were grayed out when a banker first entered the dashboard, with no indication of what step was required to activate them. Before launch, the button was made more visually prominent and instructional text was added adjacent to it establishing it as the required starting point.

Removal of the progress bar left bankers tracking overall session completion through the signer list and status indicators alone. Because both elements had already been flagged as difficult to read at a glance in earlier testing, the two implementation changes compounded each other. The progress bar was reinstated in a subsequent release when development capacity allowed.

When all signers completed signing and bankers returned to the dashboard, the completion state was not visually distinct from a session with outstanding signatures. Before launch, a completion banner replaced the "Awaiting X Signatures" header when all signers reached SIGNED status.

The modal warning about the restart requirement for updated client information was unclear. Bankers read through it without registering its implication and discovered the restart requirement mid-session. The copy was rewritten, tested with three bankers, and shipped before launch. The post-launch survey confirmed bankers understood it.

Post-Launch Outcomes

Six weeks after launch, I fielded a satisfaction survey with 147 respondents, stratified by tenure, geography, and business line, and included UMUX-Lite items to establish a usability baseline for future measurement.

CSAT improved from 68% to 83%. UMUX-Lite scored 84. Adoption reached 74% at six weeks. Open-text responses surfaced two patterns. The remote Send feature was receiving strong positive feedback from bankers who had previously needed to coordinate with colleagues in other states to bring out-of-state business account signers in for in-person signing. Bankers in branches with adequate hardware reported a smooth transition, while those in branches with teller line constraints noted continued reliance on paper signing and the manual processing it required.
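The UMUX-Lite figure comes from a standard two-item calculation rather than anything specific to this survey: each respondent rates two statements ("this system's capabilities meet my requirements" and "this system is easy to use") on a 1-7 scale, and the score rescales their sum to 0-100. A minimal sketch of that published scoring:

```python
def umux_lite(capability: int, ease: int) -> float:
    """Score two 7-point Likert items on the standard 0-100 UMUX-Lite scale."""
    for item in (capability, ease):
        if not 1 <= item <= 7:
            raise ValueError("Likert responses must be between 1 and 7")
    # Shift each item to 0-6, sum (0-12), and rescale to 0-100.
    return (capability + ease - 2) / 12 * 100

def survey_score(responses: list[tuple[int, int]]) -> float:
    """Mean UMUX-Lite score across all respondents."""
    return sum(umux_lite(c, e) for c, e in responses) / len(responses)
```

Two respondents answering (7, 6) and (6, 6), for example, would average to 87.5.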

Retrospective: What I'd Do Differently

First, I would negotiate field observation access and travel budget before the research plan is finalized, treating branch visits as a discovery requirement rather than a supplement. A travel budget constraint meant observations were scheduled around an existing Minneapolis trip rather than at the start of the program. That timing produced findings that were still actionable before launch, but they arrived after months of design decisions had been made without them.


Second, I would build a cross-phase evidence log for any finding that recurs across multiple studies. The data confirmation issue appeared in discovery, concept testing, and prototype testing before it was resolved. Presenting it as a pattern with participant counts and phase references would have been more effective than raising it fresh in each readout. The trust reframe ultimately worked, but a documented pattern would have created pressure earlier.

 

Third, I would establish quantitative usability benchmarks earlier in the lifecycle rather than waiting for post-launch measurement. The post-launch survey produced a UMUX-Lite score of 84, but there was no prior baseline to evaluate it against. CSAT had a pre-existing baseline of 68% from the previous system, which made the 15-point improvement meaningful and legible to stakeholders. The UMUX-Lite score stands alone. A benchmark study at prototype or live site testing stage would have made post-launch measurement reflect change rather than just current state, and would have given the team a clearer target going into launch.
