Scaling Indeed Resume Services 8× through Activation, Monetization, System Design, and Behavioral Design
Snapshot
Product Type
B2C service marketplace embedded in a large-scale job platform ecosystem, 0→1 and 1→N across services
My Role
Lead Designer, end-to-end strategy and execution across 4 service tiers
Timeline
2021
Core Challenge
How do you scale a service product that plateaued not because demand was weak, but because the system wasn't built to convert it — and where fixing one stage keeps breaking constraints in another?
Problem Framing
What Was Actually Broken?
Indeed Resume Services ended Year 1 with ~9,000 users and a 0.5% end-to-end conversion rate. The team's instinct was to treat this as a traffic problem.
The real failure was structural:
  • Four service tiers — Free, $19, $29, $89 — shared a single decision surface that had never been designed to help users navigate between them.
  • Service complexity scaled sharply across tiers, but the experience offered none of that structure. Users arrived with genuine intent and left because they couldn't evaluate their options or build enough trust to act.
This was a decision-quality failure.
Four service tiers:
  • Instant resume report (Free, redesigned): automated resume scan with instant best-practice feedback
  • Resume review ($19, redesigned): 10-minute video review plus notes for DIY improvements
  • Real-time resume edits ($29, 0→1 new offering): 20-minute live video chat with a professional for DIY improvements
  • Resume writing ($89, internal→external): professional resume rewrite
The deeper issue: each stage of the user journey — activation, evaluation, purchase, post-purchase — was operating independently. Improvements in one stage changed constraints in others.
The right metric wasn't conversion rate at any single stage, but throughput: qualified demand × funnel completion × delivery capacity.
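To make that multiplicative framing concrete, here is a minimal sketch with hypothetical numbers (none are the product's actual figures): improving any single factor multiplies through the others, until another stage becomes the binding constraint.

```python
# Illustrative throughput model; every number here is hypothetical.
def throughput(qualified_demand: int, funnel_completion: float, delivery_capacity: int) -> int:
    """Completed services per period: converting demand, capped by delivery capacity."""
    return min(int(qualified_demand * funnel_completion), delivery_capacity)

baseline = throughput(qualified_demand=10_000, funnel_completion=0.005, delivery_capacity=500)
doubled = throughput(qualified_demand=10_000, funnel_completion=0.010, delivery_capacity=500)
flooded = throughput(qualified_demand=1_000_000, funnel_completion=0.010, delivery_capacity=500)
print(baseline, doubled, flooded)  # 50 100 500 -- delivery capacity eventually binds
```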
Four stages in the user journey across services: Activation (entry points; behavioral timing) → Evaluation (homepage; decision support) → Purchase (wizard funnel; information quality + trust) → Post-purchase (results page; behavior change)
Why It Was Hard
Structural Gap
The product had just finished its first year. The early priority was validating whether the service was viable at all — a reasonable call.
But it left a structural gap: no one had systematically connected behavioral data with qualitative research to diagnose what was failing at each stage, and there was no experimentation infrastructure to transfer learnings across surfaces.
Building that cross-stage growth system required coordinating cross-functional partners across product, research, content, visual design, engineering, and operations — none of whom shared clean ownership of any single stage.
User experience diagnosis with qualitative and quantitative data
The Design Tension
There was also a design tension that couldn't be fully resolved: reducing friction in the purchase flow improved mobile completion but degraded editor workflow quality. The context reviewers needed to do good work was the same context users found burdensome to provide. That's not a problem with a clean solution — it's an ongoing negotiation that had to be designed into the system itself, not designed around.
Design Judgment
As the lead designer on a cross-functional team with no prior design infrastructure, the first judgment wasn't about what to design — it was about how to structure design's contribution so it could compound rather than dissipate. That meant making an explicit call on where to start and how to build a system that could transfer.
What Options Existed
Option A
Rebuild a unified journey from top to bottom.
Treat the four services as a progressive value ladder and redesign the full flow in phases. Optimized for system coherence.
The cost: hard to attribute what worked, high business risk during the rebuild period — and because each stage's improvements change constraints in others, this approach risked continuing to amplify an already leaky system.
Option B (Chosen)
Build a reusable growth system — validate the foundation, then scale it.
Distill the structural logic that governs all four services, validate it on one, then transfer it systematically. Optimized for measurability and low coordination cost.
The cost: slower coverage — the new $29 service would have to wait.
The Decision and Why
The choice was Option B — and the reasoning was straightforward: we needed a reliable mechanism for knowing what worked and replicating it. Without a repeatable growth and experimentation system, every improvement would be a one-off. We'd always be rebuilding rather than scaling.
Starting on Resume Review ($19) was deliberate. Mid-tier: complex enough to stress-test the system across all four stages, low-stakes enough that early failures wouldn't damage the business. Critically, it sat at the decision pivot where users had to weigh real money against uncertain value — the hardest trust problem in the lineup. Solving it there meant the logic would transfer up to the $29 and $89 tiers and down to the free tier.
The experimentation system was built to be transferable as a practice, not just a deliverable: identify essential units → test best variants at the unit level → introduce additional units when they add value → optimize at the system level through combinations, sequence, and length.
The same logic applied to different surfaces:
Applied to the Homepage
Essential modules → variants → additional modules → combinations and length
Applied to the Purchase Funnel
Essential inputs → variants → additional steps → combinations and sequence
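A minimal code sketch of that loop, using assumed unit names and hypothetical lift numbers (nothing here reflects the real variants or measurements):

```python
# Sketch of the unit-level experimentation loop (all names and lifts are hypothetical).
# 1) fix essential units, 2) pick the best variant per unit,
# 3) keep optional units only when they add value, 4) optimize sequence at the system level.
from itertools import permutations

essential_units = {                      # observed conversion lift per variant
    "hero": {"control": 0.00, "outcome_framing": 0.04},
    "pricing": {"control": 0.00, "tier_comparison": 0.06},
}
optional_units = {"testimonials": 0.02, "faq": -0.01}  # marginal lift vs. added length

best = {unit: max(variants, key=variants.get) for unit, variants in essential_units.items()}
kept = [unit for unit, lift in optional_units.items() if lift > 0]

# System-level pass: with units settled, test combinations, sequence, and length.
candidate_sequences = list(permutations(list(best) + kept))
print(best, kept, len(candidate_sequences))
```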
Design Intervention
What Changed and Why It Mattered
Intervention 1: Activation around behavioral timing, not channel volume
Judgement
Motivation peaks at specific job-seeking moments, not continuously.
Evidence
The product had 20+ entry points with dramatically varied conversion rates and no principled model for why.
Approach
  • Test seven high-intent, high-impact entry points across the ecosystem
  • Design solutions that raise motivation and reduce friction at the right moments, with the right messaging
Result
~400× YoY traffic growth.
Not from new acquisition channels, but from activating demand that already existed at the moments it was strongest.
Dramatically varying conversion rates across the ecosystem
B=MAP (the Fogg Behavior Model) shows behavior happens when motivation, ability, and a prompt converge at the same moment.
I cross-referenced conversion data with qualitative research and customer review analysis to identify moments when motivation, ability, and prompt timing could converge: users with strong apply intent, users actively editing resumes, users receiving rejection signals.
Ideated on prompt messaging from quantitative data analysis and customer review analysis
We ideated on motivation levers, friction reduction, and prompt messaging for each moment, then tested seven high-intent, high-impact entry points across the ecosystem.
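A sketch of how such moments can be compared under B=MAP (all scores hypothetical; the real prioritization combined conversion data, UXR, and review analysis). The score is multiplicative because a miss on any one axis kills the behavior.

```python
# Hypothetical B=MAP scoring of candidate entry-point moments.
moments = {
    "strong apply intent": {"motivation": 0.9, "ability": 0.7, "prompt": 0.8},
    "actively editing resume": {"motivation": 0.7, "ability": 0.9, "prompt": 0.9},
    "rejection signal received": {"motivation": 0.8, "ability": 0.6, "prompt": 0.7},
    "idle homepage visit": {"motivation": 0.2, "ability": 0.9, "prompt": 0.3},
}

def bmap_score(m: dict) -> float:
    # Multiplicative: motivation, ability, and prompt must converge at once.
    return m["motivation"] * m["ability"] * m["prompt"]

for name, scores in sorted(moments.items(), key=lambda kv: -bmap_score(kv[1])):
    print(f"{name}: {bmap_score(scores):.2f}")
```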
Intervention 2: The Homepage as Decision Support, Not Persuasion
Judgement
Treat the homepage as decision support, not persuasion.
Evaluation must communicate value, build trust, clarify options, and reduce decision paralysis.
Evidence
Users could not evaluate value or confidently take action:
  • 80% homepage drop-off
  • Only 25% scrolled below the fold
  • UXR results showed low value perception and weak differentiation: users couldn't answer "which service should I choose, and why?"
Strategy
  • Reduce first-fold drop-offs
  • Deliver clear value differentiation across Free→$19→$29→$89
  • Drive immediate action
  • Improve holistic value communication (in modular testing)
Reducing high drop-offs: Prioritized high-impact content above the fold
Reducing high drop-offs: Designed visual cues to encourage scrolling near the average fold
Deliver clear value differentiation: Service & Pricing section after 5 iterations
Drive immediate action: Surfaced an engaging service-start question prominently, letting users signal intent before being asked to commit.
Execution: Apply the experimentation system to the homepage
  • Modularize the page
  • Produce multiple variants per essential module
  • Add modules only when they add value
  • Test combinations, sequence, and length to maximize value education and trust & credibility (a minimal test-readout sketch follows this list)
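As promised above, a minimal sketch of how one module-variant test could be read out, using a standard two-proportion z-test. The counts are hypothetical, and this stands in for whatever experimentation infrastructure actually ran the tests.

```python
# Generic readout for one homepage module A/B test (hypothetical counts).
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Control vs. a "tier comparison" pricing module (illustrative numbers only).
z, p = two_proportion_z(conv_a=210, n_a=10_000, conv_b=315, n_b=10_000)
print(f"z={z:.2f}, p={p:.4f}")
```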
Result
  • 50% increase in homepage conversion for the $19 Resume Review service
  • Qualitatively, stronger value perception, differentiation, and user trust
"They do a pretty good job of covering everything and setting expectations. It really makes it easy to understand exactly what I'm getting and how much I’m paying. The process was very straightforward." — Usability testing participant
Modularized the homepage into two goal groups, value education and trust & credibility, then tested combinations, sequence, and length.
Ideated on variants per essential module
Homepage variant example
Intervention 3: The Purchase Funnel as a Two-Sided Information System
Judgement
  • Design purchase as a signal-collection and seriousness-signaling layer.
  • The bottleneck was information quality and perceived value, not step count.
  • Experiments should build a solid foundation with essential inputs first, then optimize performance.
  • Essential inputs must serve users, editors, and the platform.
  • Apply behavioral science to encourage action at each input.
Evidence
  • 97% drop-off
  • Long and frustrating flows, especially on mobile
  • Low perceived value, weak user trust, and payment-method convenience issues during checkout
  • Editors were receiving low-quality inputs that slowed delivery and introduced quality risk
  • Fewer steps without the right inputs would hurt editors. More steps without higher quality would hurt users. Both failure modes had the same root: the funnel had never been designed to serve both sides at once.
The long and frustrating 11-step flow of the $19 service on mobile
Execution: Apply the experimentation system to the wizard funnel
  • Identify four essential information inputs (schema sketched after this list)
  • Test variants for each essential input to improve clarity, effort, and downstream usefulness
  • After stabilizing essential inputs, add steps for deeper value communication and expectation setting
  • Test combinations, sequence, and length for best performance
  • The winning variant introduced a "top concerns" step upfront: a commitment device that improved value framing before payment and gave editors higher-quality briefs
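The schema referenced above, as a sketch (step names hypothetical): each step declares who it serves, which turns "fewer steps vs. better inputs" from an instinct into an explicit check.

```python
# Hypothetical wizard-funnel schema: every step declares who it serves.
from dataclasses import dataclass

@dataclass
class FunnelStep:
    name: str
    essential: bool
    serves: set  # subset of {"user", "editor", "platform"}

funnel = [
    FunnelStep("top_concerns", essential=True, serves={"user", "editor"}),  # commitment device + better brief
    FunnelStep("resume_upload", essential=True, serves={"user", "editor"}),
    FunnelStep("target_role", essential=True, serves={"editor", "platform"}),
    FunnelStep("payment", essential=True, serves={"platform"}),
    FunnelStep("expectation_setting", essential=False, serves={"user"}),
]

# Guardrail: an essential step that burdens users without serving the other side
# is exactly the two-sided failure mode the funnel had to design out.
for step in funnel:
    if step.essential and step.serves == {"user"}:
        raise ValueError(f"{step.name} adds friction without two-sided value")
```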
Trade-offs
  • Input effort vs input quality (especially mobile)
  • Friction vs seriousness
  • Input quality vs delivery efficiency
Identify essential inputs that serve users, editors, and the platform — minimum viable information for a quality review
After stabilizing essential inputs, add new steps
Test combinations, sequence, and length for best performance: I proposed 10+ test ideas
The winning variant of the $19 service introduced a "top concerns" step upfront: a commitment device that improved value framing before payment and gave editors higher-quality briefs
Experimentation gallery
The winning variant of the $89 service: a "Checkout first, add details later" mechanism — users completed payment with essential inputs, then provided richer context asynchronously before work began.
Before
After
0→1 launch of the $29 live video session service with no prior funnel to reference, applying the essential-inputs-first approach to a new service format.
Result
Three services. Three complexity profiles. One transferable system.
Resume Review ($19)
  • Redesigned
  • +188% funnel completion
  • +11% desktop checkout
  • +22% mobile checkout
  • 4.6/5 quality rating maintained
  • ~25% faster editor workflow
Resume Writing ($89)
  • Redesigned
  • +355% funnel completion
  • +68% desktop checkout
  • +45% mobile checkout
  • 4.5/5 quality rating maintained
  • ~25%+ faster editor workflow
Real-time Resume Edits ($29)
  • 0→1 launch
  • 150+ sessions held
  • 4.6/5 quality rating
Intervention 4: Post-purchase as a behavior-change surface, not a delivery endpoint
Judgement
Purchase completion was never the success criterion — behavior change inside the Indeed ecosystem was.
Intervention
We kept the intervention minimal but intentional: actionable guidance on how to improve resumes, and direct prompts to apply those improvements inside their Indeed profiles. The question was whether users would act at all after receiving their results. We instrumented for that first.
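A sketch of the instrumentation order that implies (event names hypothetical): measure whether users act at all before measuring how well they act.

```python
# Hypothetical post-purchase events, ordered from delivery to the behavior we cared about.
EVENTS = [
    "results_viewed",          # delivery endpoint
    "guidance_expanded",       # engaged with the actionable guidance
    "edit_resume_clicked",     # followed the prompt into the Indeed profile
    "profile_resume_updated",  # the behavior change that defined success
]

def step_rates(counts: dict) -> dict:
    """Step-over-step conversion through the post-purchase funnel."""
    return {
        curr: (counts[curr] / counts[prev] if counts[prev] else 0.0)
        for prev, curr in zip(EVENTS, EVENTS[1:])
    }

print(step_rates({"results_viewed": 1000, "guidance_expanded": 620,
                  "edit_resume_clicked": 410, "profile_resume_updated": 305}))
```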
Result
They did: users were 93% more likely to update their resumes, 55% more likely to receive employer signals, and the product generated 212K new account and profile creations.
Actionable guidance + "edit resume" framing on one of the results pages
What Was Intentionally Left Out
We knowingly under-invested in the free tier — Instant Resume Report — across both activation and evaluation. We knew a meaningful segment of users wanted to try the free service first and pay only if the value was clear. That "try before you buy" path was real, and we chose not to support it fully.
The free tier's underlying service quality was also unstable at the time: it relied on early LLM infrastructure that we didn't have the resources to continuously optimize. Investing heavily in activating and evaluating a service we couldn't consistently deliver would have created a different problem — driving users into an experience that undermined trust rather than building it.
The call was to concentrate resources on the paid tiers where quality was controllable and the value signal was clearer. The cost was leaving a legitimate conversion path underserved.
Impact
The metric that mattered most wasn't inside the funnel. Users who completed a service were 93% more likely to update their resumes in their Indeed profiles, 55% more likely to receive employer signals, and the product generated 212K new account and profile creations. That's the outcome that validated the original framing: real business impact required behavior change inside the Indeed ecosystem, not just purchase completion.
System throughput improved across all four tiers:
  • Revenue: 8× YoY
  • Job seekers helped: 4M+
  • End-to-end conversion: 2% (up from 0.5%)
  • New account & profile creations: 212K
  • Editor workflow: 25%+ faster
"It impacted the creativity and variety of opportunities I can think of for our funnel, and the level of testing and iteration we can do." — Product leader.
What I'd measure next: whether improved resumes generated better-fit interviews — closing the loop from service quality to ecosystem health. That's the outcome that would justify continued investment in the service line, and the one hardest to instrument within the project's scope.
Trade-offs & Reflection
The Free Tier Decision
This decision is the one I'd revisit most. The rationale for under-investing was sound — unstable service quality made aggressive activation risky. But in hindsight, we conflated two separate problems: service stability and experience investment. Even with GPT-3 limitations, a more intentional evaluation surface could have helped users understand what the free tier was and wasn't, and set expectations that made the "try before you buy" path more viable. We left that conversion bridge unbuilt, and I don't think we fully stress-tested whether we had to.
Deliberately Preserved Friction in the Purchase Funnel
The "top concerns" step adds cognitive load before checkout. It's a bet on seriousness over brevity — users who articulate what they want get better outcomes, and that downstream quality signal justified the upfront cost.
The Assumption I'm Least Confident In
That high-intent activation scales cleanly as the ecosystem grows. The seven entry points were effective in a bounded product context. As the product surface area expands, the behavioral timing model needs continuous re-evaluation — not treatment as a fixed map.
What I'd Change
With more time, I would have unified data instrumentation across services much earlier. Funnel metrics and ecosystem impact weren't cleanly comparable across tiers because each service had been instrumented independently. That made cross-service learning slower than anything else, and it's the single change that would have compounded the most.
I'm drawn to problems where the growth constraint is hiding inside the system's own structure — where symptoms look like demand gaps but the actual failure is in how the product helps people make decisions, build trust, and take action. My instinct on this project was to refuse the framing that more traffic would fix what was fundamentally a decision-quality problem, and to invest in a repeatable learning mechanism before scaling reach.
What I find satisfying about this project isn't the 8× outcome. It's that the growth and experimentation system we built is what made the 8× repeatable across four services, three complexity levels, and one year of product growth.