2025
From ignored to essential

Transforming Workday Search into something people actually trust and love
In 2025, Workday launched its new Generative AI capabilities, with Enhanced AI Search positioned as a centrepiece innovation.



But behind the launch, the experience was struggling. Users didn’t trust it, didn’t understand it, and often avoided it altogether.
As the AI UX lead on this high-impact initiative, I led the transformation of Workday’s search from a fragmented, low-performing system into a clear, human-centred experience people could rely on.
I introduced a structured AI framework grounded in user intent and real behaviour, helping people quickly understand, decide, and take action without needing to figure out the system.
Following the launch of these improvements, click-through rates increased from 7.5% to over 30% within four months. ML latency also dropped from over 8 seconds to under 1 second, significantly improving responsiveness.
This established a scalable foundation for an AI search experience now serving over 70 million users.


How we started
My Role and Goals
As the UX Lead (and a member of Workday’s AI UX Council), my role was:
Execution:
Identifying why users struggled with Workday search through research and data analysis
Designing high-impact UX improvements within a two-month timeframe, ahead of the scheduled release of a critical AI-powered feature.
System & Scale:
Creating reusable UX patterns and components for Findability AI, supporting AI typeahead, ranking, and generative summaries
Establishing interaction models for how users engage with AI when searching and completing tasks in Workday
Contributing to Workday’s AI UX guidelines to ensure consistency across teams building AI-powered experiences.
Defining a clear, user-centred, multi-year AI transformation vision for Workday Search in partnership with product and engineering.
In partnership with a UX researcher, I led a baseline research effort to understand Workday users’ search needs and pain points.
At the same time, we were working within significant technical constraints that shaped what was possible.
The System Constraints We Had to Work Within
System ↔ user-needs mismatch
Legacy foundation → UX challenges
Workday is modernizing fast, but the foundation limits what’s possible.
Workday was built for HR experts, but over 80% of its users were not.
Workday began as a highly secure HRIS (human resource information system), designed for HR professionals, full of specialized terminology and complex processes.
As Workday scaled to over 70 million users worldwide, more than 80% of them were not HR experts. People who don’t speak the “Workday language” became the primary user base.
This mismatch between the system’s foundation and its modern user base sits at the heart of many UX challenges we faced, especially after I moved to the Search team.


Even the CEO said, “Search is the most broken part of Workday.”
83% of employees and managers told us they rarely or never used Search.
Our survey confirmed it: 5 out of 6 employees and managers said they rarely or never used Search.
To understand the root causes behind low adoption, we conducted research over a six-week period, including:
11 user interviews
Behavioural analytics on search interactions (clicks, toggles, task success rates, and time to reach a task)
Data analysis comparing AI-powered results with traditional search results, focusing on result quality and user engagement.
As we progressed, it became clear why our users weren’t happy:

Old Workday Search (pre-2024)

Avoidance
HR’s “thing” isn’t for most people
“I need to change my name in Workday… but I don’t even know where to start.”
“I want to know my time off policy before requesting time off, but I don’t even know how to ask that in Workday search”
Workday = not for fun
“I just want to get in and get out and back to my work as quick as I can!”

Disappointment
“People” always shows first → but that’s not always what users are searching for.
“Why I always see “People” first?”
“Don’t know how they're even relevant”
Ambiguous results without clear context → Users struggle to pick the right option
“Search has never been helpful”
“Search slows me down”
“Don’t know how these results are relevant to my search.”
Lack of trust
A few bad experiences are enough to break trust in Workday search.
“I don’t use search because it slows me down”
“Search is my very last resort”
“I search directly on Google when I’m not sure what to do in Workday since it is much easier and actually gives me better answers.”
“I wish Workday search works like Google”

When users did try searching, they got disappointed.
Even simple, everyday phrases like ‘vacation request’ didn’t return results, because the search wasn’t forgiving — it only matched exact Workday terminology.
So users felt like they had to guess the ‘right Workday words’ to find anything.
And even when users did remember the correct Workday magic word — like ‘time off’ — the results still didn’t feel helpful.
Search categories were locked into a fixed order, so ‘People’ always appeared first — even when that wasn’t what users were looking for. And when results did appear, they often lacked context.
Users didn’t understand why something was showing up or how it related to their question, so choosing the right result relied entirely on memory and trial and error, not on recognition or system support.
After these few bad experiences, trust breaks.
Our research showed that users try Workday Search once or twice — but if they don’t find what they need immediately, they stop using it.
Many go straight to Google instead, because it feels faster, clearer, and more predictable.
For a surprising number of users (or perhaps as expected), Google became the starting point of their Workday journey, before they even logged in.

Learn from what people love
“Search” = Google
Users weren’t struggling because search lacked results. They were struggling because the system didn’t match how they think when they search.
Almost everyone we interviewed directly compared Workday Search to Google — they expected it to be just as easy and intuitive.
This aligns with Jakob’s Law: users want new experiences to behave like the ones they already know.
What actually makes Google so good?
So I studied Google’s UX principles — not to copy Google, but to understand how they design for intent, clarity, and trust.


search intents simplified
know, go, or do?
One idea stood out right away — the Know, Go, Do framework.
It’s a simple but powerful model that distills user search intent into just three patterns. When users search, they're typically thinking: I want to Know something, I want to Go somewhere, or I want to Do something.
Understanding search this way helped me visualize user intent more clearly, and it matched how I naturally approach problems — visually and systematically.
It also gave me a chance to rethink Workday Search from the ground up, starting with its UX instead of its technical limits.


With Know, Go, Do
Defining the Findability AI System
[Diagram: the Know–Go–Do intent model]
I want to know → Guidance needs (supported by Efficiency and Action needs)
I want to go → Efficiency needs (supported by Guidance and Action needs)
I want to do → Action needs (supported by Guidance and Efficiency needs)
As the Findability AI Lead, my role was to define how Workday search should behave as we introduced an AI layer into the experience.
I partnered closely with five product teams, along with engineering, research, and content strategy, to extend the Know–Go–Do framework across three core surfaces:
AI-powered typeahead
AI-informed ranking and search results
Generative summaries
I defined the behavioural model and translated it into interaction design, prototypes, and production-ready UI across these surfaces.
At the same time, I partnered with product and engineering to shape a multi-year Findability vision — mapping how search would evolve across Now, Next, and Future, while keeping the behavioural foundation consistent.
Know–Go–Do became the structural backbone of the experience. Even as AI capabilities evolved, the underlying logic remained stable.
This consistency allowed us to introduce new functionality without sacrificing clarity, enabling the system to scale over time while remaining predictable and easy to use.
But a framework only matters if it can be applied consistently across real workflows.

Phase — 1
Identify High-Value Manager Workflows
Mgr JTBD: Find the right candidate for my team so I can help support its growth and needs.
“Hiring”
Dive into metrics
Top 10 Tasks
*Illustrative example, not a real Workday JTBD
To operationalize this at scale, I started with behavioural data.
We identified the highest-impact use cases across Workday.
Working with PM and UX Research, we analyzed behavioural metrics and surfaced the top 10 most-used manager workflows. I’ll use Hiring, one of the highest-frequency manager activities, as an example to show how this works in practice.
These workflows became the baseline for applying the intent model.
From there, we mapped each workflow through the Know–Go–Do lens to define how the system should respond to different types of intent.
Phase — 2
Subject-matter expert interview world tour
Mgr JTBD: Find the right candidate for my team so I can help support its growth and needs.
“Hiring”
*Illustrative example, not a real Workday JTBD
[Diagram scaffold: I want to know · I want to go · I want to do (goal): Start Job Requisition]
Next, I partnered with designers who owned each workflow to map the four components of the Know–Go–Do model.
For Hiring, I learned that the most common starting point is Start Job Requisition, so this becomes the primary Do action we anchor to.
Next, we define what information managers need to act confidently — the Know layer.
For Hiring, that includes process guidance, open requisitions, and historical context.
For example, managers often review previous requisitions before creating a new one. That becomes a key Know element surfaced directly in the summary.
Mgr JTBD: Find the right candidate for my team so I can help support its growth and needs.
“Hiring”
*Illustrative example, not a real Workday JTBD
I want to know: how to start the hiring process · current open requisitions · past or similar openings · contextual guidance (KB)
I want to go: Manager Insight Hub · Job Requisition Dashboard · Hiring documentation
I want to do (goal): Start Job Requisition
+ data sources: usage patterns · knowledge content · dashboards
The Go layer captured the navigation hubs and dashboards that managers relied on most.
These destinations also served as structured data sources, helping us assemble reliable and context-aware summaries.
Finally, we mapped the system’s data sources — recruiting services, behavioural signals, knowledge content, and dashboards — so AI responses were grounded in trusted inputs.
Phase — 3
Source of truth for cross functional use

[Airtable view: I want to know · I want to go · I want to do]
From there, I captured the four key components in Airtable.
We used it as a shared source of truth for intent modelling.
This allowed us to compare Know, Go, and Do combinations, identify structural gaps, and align cross-functional teams around a single behavioural framework.
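To make that shared source of truth concrete, here is a minimal sketch of what one row of the intent model might look like as a data record. The field names and the `IntentMap` type are illustrative assumptions, not Workday’s actual schema; the values come from the Hiring example above.

```python
from dataclasses import dataclass, field

@dataclass
class IntentMap:
    """One row of the shared intent model (illustrative, not Workday's schema)."""
    workflow: str
    do_goal: str                                     # primary action the workflow anchors to
    know: list[str] = field(default_factory=list)    # information needed to act confidently
    go: list[str] = field(default_factory=list)      # navigation hubs and dashboards
    data_sources: list[str] = field(default_factory=list)

hiring = IntentMap(
    workflow="Hiring",
    do_goal="Start Job Requisition",
    know=["how to start the hiring process", "current open requisitions",
          "past or similar openings", "contextual guidance (KB)"],
    go=["Manager Insight Hub", "Job Requisition Dashboard", "Hiring documentation"],
    data_sources=["usage patterns", "knowledge content", "dashboards"],
)
```

Holding every workflow in one typed structure is what made gap-finding possible: an empty `know` or `go` list is immediately visible across teams.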


Findability for Workday Illuminate AI
Making Know–Go–Do work for “HR thing” users
Workday’s Know–Go–Do Model Example
guidance needs
Know
Such as:
“How much time off do I have left this year?”
“How much tax did I contribute last month?”
“How do I start hiring?”
For Know intent, the system provides:
An AI summary that surfaces clear answers and supporting data, helping users move toward “Do” (action).
With the Glean AI integration, users can now ask follow-up questions right from the summary for deeper insight.
action needs
Do
Such as:
“Take next Friday off.”
“Submit my expense report.”
For Do intent, the system provides:
When the system detects a Do intent, the AI summary becomes the task entry point. For example, it opens a date selector pre-filled with your time-off balance; you just confirm, and the action completes.
Every element is designed to shorten the path from intent to “done”.
Efficiency needs
Go
Such as:
“inventory report#3657”
“Case ID 363”
“Candidate Henry Lam”
For Go intent, the system provides:
No summary appears.
The focus shifts to fast, relevant Typeahead suggestions or clearly listed search results so that users can quickly find exactly what they need.
Anything else is just noise.
Here’s a high-level view of the new Workday Know–Go–Do model I created.
For Know needs, the goal is proactive clarity.
In typeahead, we surface an AI-powered summary as the top result. On the search page, we present a richer, personalized generative summary — giving users immediate context while still supporting a smooth handoff into action.
For Do needs, the goal is frictionless execution.
When intent is clear, AI prompts the right task entry point directly — or in some cases completes the task immediately — reducing unnecessary steps.
For Go needs, the goal is speed.
Users already know where they want to navigate. Anything extra becomes noise. So instead of summaries, the system focuses on fast, highly relevant results — personalized by usage, role, location, and metadata — helping users reach their destination with minimal effort.
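The three behaviours above reduce to a single routing rule: each detected intent maps to one predictable response surface. The sketch below is a hypothetical illustration (the `respond` function and its labels are my own, and ML intent detection is assumed to have already happened), not Workday’s implementation:

```python
# Minimal sketch: each intent maps to exactly one predictable response surface.
# Intent detection itself (the ML layer) is out of scope; "know", "go", and
# "do" are assumed to arrive already classified.

def respond(intent: str) -> dict:
    surfaces = {
        "know": {"surface": "generative_summary",  # proactive clarity
                 "goal": "clarity"},
        "do":   {"surface": "task_entry_point",    # frictionless execution
                 "goal": "execution"},
        "go":   {"surface": "ranked_results",      # fast, relevant navigation
                 "goal": "speed"},
    }
    if intent not in surfaces:
        raise ValueError(f"unknown intent: {intent}")
    return surfaces[intent]
```

The stability of this mapping is the point: AI capabilities behind each surface can evolve, but the user-facing behaviour per intent never changes.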
To effectively meet both user needs for efficiency and contextual guidance, we envisioned a single, unified engine that could provide helpful summaries exactly when and how users needed them—not just in search, but across multiple touch points.
Before

After

Five independent search engines operated in isolation, returning inconsistent results based on different definitions of relevance.
A single, unified engine designed to deliver relevant, helpful content consistently across multiple touchpoints.
Note: Details simplified to protect confidential information.

Unified Framework
helps us move quickly while adding clarity

Even when I was asked to rescue our sister team’s AI feature, I relied on the same Know–Go–Do foundation.
The team had been experimenting with Retrieval AI, but click-through was only 7.5%, and ML latency was over 8 seconds. With the fall release approaching, leadership was concerned. I was brought in to diagnose the issue and improve the experience — and we had just three weeks.
Very quickly, I realized the problem wasn’t only technical.
The UI itself was breaking the Know–Go–Do model it was supposed to follow. The system was actually returning relevant results — but the card design made it unclear whether users were supposed to navigate somewhere or take action directly.
The system wasn’t wrong.
The interface was misrepresenting the intent.

Before (problems):
1. Banner blindness
2. AI makes decisions on behalf of users
3. Buttons slowed users down, lacking context and clarity

After (fixes):
1. More content, more choices
2. AI “suggests”, not “decides”
3. Bring back blue links
The backend for this feature was Retrieval AI — meaning the system could only return links. So the items inside the card were actually Go items: search destinations.
But they were being styled as Do actions, using buttons.
Because everything was turned into a button — sometimes even long, multi-line buttons — users couldn’t tell whether they were navigating somewhere or initiating an action. It created immediate confusion about what the system expected them to do.
This blurred intent and confused users.
Using Know–Go–Do as our north star, I corrected the intent mismatch in parallel with the ML model update.
I turned those items back into blue links — the proper “Go” treatment — so users could instantly recognize them as navigational results rather than task actions.
Once visual treatment matched intent, the experience aligned.
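The fix amounts to one rule: the component is chosen from the item’s intent type, never the other way around. A hypothetical sketch (the `ui_treatment` function and treatment names are mine, for illustration only):

```python
# Sketch: Retrieval AI can only return links, so every item it produces is a
# Go item and must render as a link. Only true Do actions earn a button.

def ui_treatment(intent: str) -> str:
    treatments = {
        "go": "link",           # navigational result -> blue link
        "do": "button",         # task action -> button
        "know": "summary_card", # informational answer -> summary card
    }
    return treatments[intent]

# Every Retrieval AI result is navigational, so every one renders as a link:
retrieval_results = [{"title": "Job Requisition Dashboard", "intent": "go"}]
assert all(ui_treatment(r["intent"]) == "link" for r in retrieval_results)
```

Styling Go items as buttons was the original bug in miniature: the treatment promised an action the backend could not deliver.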
After this fix, first-click task success increased by about 80 percent, and nearly 90 percent of users understood the Top Results as helpful system suggestions — exactly what we wanted.
And when we released it to EA customers, CTR jumped more than four times, from 7.5 percent to over 30 percent, while ML latency dropped from over 8 seconds to under 1 second, within four months.


Unified Framework
Designing AI search to help get “workday things” done fast
How much was deducted from the pay?



“take time off”

“take next Friday off”








These are some of the key components I delivered as part of the new Workday AI Search experience.
I designed the experience end-to-end — bringing together generative summaries, AI-powered typeahead, intent-driven ranking, intelligent filters, and contextual task entry points into one unified system.
I designed this with the same goal I’ve held since the hackathon: to help 70 million users get their “Workday things” done faster.

meet New Workday AI Search
Actions in context, just straight answers


1
Understands Me
The system understands me, even if I don’t know Workday’s jargon.
2
Anticipates My Needs
The system shows me a summary card with clear next steps.
3
Personalizes For Me
It feels tailored specifically to my needs and my workflow.
4
Makes Things Easier
The experience is organized, easy to understand, and saves me time.
My work concluded with delivering a new Workday Search experience — grounded in a behavioural framework that began as a small experiment, and now deeply embedded in Workday’s operating model.
This vision gave structure to fragmented systems. It unified backend engines, clarified ownership across teams, and created a strong shared language across product, design, and ML.
It also established a stable foundation for AI to evolve responsibly.
Workday is complex. Users don’t come here to explore or for excitement. They come to complete tasks.
So I just wanted to make search genuinely helpful by listening to the pain points of “HR thing” users.
So the mission stayed simple:


diginomica.com (December 13, 2024): Transforming the enterprise user experience with AI, by Katie Holden (VP of AI)
“Get Workday things done fast.”
That clarity became the anchor for every decision — not adding features for novelty, but defining clear behavioural rules that made the system more predictable and useful.
When AI was introduced, I designed the experience to follow that same principle. AI didn’t change the mission — it strengthened it.
Search at Workday evolved from simply retrieving links to guiding people toward the right action — and helping them complete tasks with confidence.
Working under pressure strengthened my discipline around clear, evidence-based communication (and lots of metaphors), especially when influencing cross-functional stakeholders.
What stayed with me most from this experience is:
in complex systems, behavioural clarity scales.
When behaviour is predictable, trust increases — and adoption follows.
Search now has a clear mission:
“Get Workday things done fast”
36s → 8s
Average search time reduced 4x
92%
New search Adoption
14%+
Improved Findability in 3 months
30%+
Time Saved for Manager Approvals
8s → <1s
ML Latency dropped
7.5% → ~30%+
Top Results CTR (EA)
8% → 22%
Search Filter Usage Increased

