Enterprise AI · Copperleaf · 2022 · Product Manager

Copperleaf — 2× adoption on the AI optimizer across $2.6T in infrastructure

Doubled the number of users actively running AI portfolio optimization across $2.6T of infrastructure assets in oil, gas, electric utilities, and water — by redesigning the UX around AI transparency, scenario lifecycle, and trust in the underlying Gurobi mixed-integer solver.

Users optimizing scenarios: 2×
SUS: 50 → 60 (+10)
Customer retention: ~100%
Assets under decision: $2.6T

Stack: Gurobi MIP solver · Scenario lifecycle UX · AI transparency + pre-checks · Customer User Journey

Problem

Copperleaf C55 was designed to help enterprises optimize multi-billion-dollar infrastructure investment portfolios — across oil, gas, electric utilities, and water — using an AI-powered decision engine running on the Gurobi mixed-integer programming solver. The model balances scenario factors a planner usually has to weigh by hand: resources, outages, timing, dependencies, value, cost, risks. The math was sound and the recommendations were defensible.

Adoption still stalled. The primary roadblock was the perception of overpricing, compounded by a lack of understanding of how the AI optimization process actually worked. Users spent excessive time on manual post-processing because they didn’t trust or understand the model. Gurobi’s opacity made it difficult to grasp how decisions were made; without a way to visualize the optimization, users couldn’t integrate it into their workflow.

Approach

I led the Empowered Scenario Experience initiative — a UX, AI-transparency, and customer-journey redesign that re-positioned the optimizer from "black-box solver" to "decision tool you can defend in a room." The hero above shows the redesigned scenario lifecycle (Draft → Submitted → Approved) with profile icons, folders, tags, and custom metadata so every decision has provenance. The diagnosis came from sitting with users: the math wasn’t the problem, the story a planner had to tell after running the math was the problem.

“Glitchy. Fails too often to rely on. I run the optimization, then spend more time fixing what it gave me than I would’ve spent doing it by hand.”
— Customer User Journey field notes
  1. Visual scenario lifecycle.

    Introduced a status progression — Draft → Submitted → Approved — with profile icons on approvals so the audit trail was visible at a glance. Added folders, tags, and custom metadata so users could organize by program, business unit, or fiscal year and find the decision history later.

  2. Investment grid, decluttered.

    The grid was the bottleneck — cluttered, time-consuming, overwhelming. Introduced color-coded functions, customizable preferences, icon-based tooltips, and decision-area panels so users could navigate the workflow without learning the schema.

  3. Customer User Journey initiative.

    Saw the platform end-to-end through users’ eyes. Rewrote the system feedback to clarify what was happening, why it was necessary, and how to address potential issues. Built training materials to disambiguate portfolio vs scenario — the single most common point of confusion.

  4. Pre-check before optimization.

    Built a pre-flight check that surfaced avoidable issues (missing constraints, misconfigured budgets, infeasible plans) before the user hit Run, instead of waiting for Gurobi to fail mid-run. Failure rate dropped, trust climbed.

  5. Probability spectrum education.

    Surfaced the model’s heuristic nature explicitly — high / mid / low confidence on each output — so users calibrated expectations instead of treating the optimizer as oracle. The trust gain came from honest uncertainty, not from hiding it.
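The pre-check idea in step 4 can be sketched in a few lines. This is an illustrative toy, not Copperleaf's code — `Scenario` and `pre_check` are hypothetical names — but it shows the principle: run cheap validations (budget configured, inputs present, mandatory spend feasible) before the expensive solver run, so failures surface as warnings instead of mid-run errors.

```python
# Hypothetical pre-flight check, sketched for illustration.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    budget: float
    projects: list          # (name, cost) pairs
    mandatory: set = field(default_factory=set)  # names that must be funded

def pre_check(scenario):
    """Return human-readable warnings; an empty list means safe to run."""
    warnings = []
    if scenario.budget <= 0:
        warnings.append("Budget is missing or non-positive.")
    if not scenario.projects:
        warnings.append("No candidate investments in scope.")
    # Cheapest plan that still funds every mandatory project:
    mandatory_cost = sum(cost for name, cost in scenario.projects
                         if name in scenario.mandatory)
    if mandatory_cost > scenario.budget:
        warnings.append(
            f"Infeasible: mandatory projects cost {mandatory_cost:,.0f}, "
            f"exceeding the {scenario.budget:,.0f} budget.")
    return warnings
```

Each warning maps to a failure mode the solver would otherwise discover only after minutes of compute — the whole point of the pre-check was moving that discovery to before the Run button.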

The Gurobi parallel mixed-integer solver under the hood. The redesign didn’t change the math — it changed how much of the math the user had to internalize before they could act on the result.

The Gurobi mixed-integer programming solver running underneath C55 walks a branch-and-bound tree intelligently — pruning whole regions of the search space — to find the optimal subset of investments across hundreds of candidate projects, multiple budget constraints, and decade-long planning horizons. The kind of math that lets a utility planner answer "of these 4,000 work packages, which 60 maximize reliability under a $480M budget, while respecting outage windows and resource constraints?" in seconds.
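For readers unfamiliar with branch-and-bound, a toy version of the project-selection problem makes the pruning idea concrete. This is a pure-Python sketch of a single-budget knapsack — a real solver like Gurobi handles thousands of binaries, multiple budget constraints, dependencies, and timing windows, none of which this illustration attempts.

```python
# Toy branch-and-bound: pick the subset of candidate projects that
# maximizes value under one budget, pruning branches whose optimistic
# bound cannot beat the best portfolio found so far.

def best_portfolio(projects, budget):
    """projects: list of (name, cost, value). Returns (best_value, chosen_names)."""
    # Sort by value density so the fractional bound below is tight.
    projects = sorted(projects, key=lambda p: p[2] / p[1], reverse=True)
    best = [0.0, []]  # incumbent: [value, chosen names]

    def bound(i, remaining, value):
        # Optimistic bound: fill the remaining budget fractionally
        # (the LP relaxation of the remaining subproblem).
        for name, cost, val in projects[i:]:
            if cost <= remaining:
                remaining -= cost
                value += val
            else:
                return value + val * remaining / cost
        return value

    def branch(i, remaining, value, chosen):
        if value > best[0]:
            best[0], best[1] = value, chosen[:]
        if i == len(projects) or bound(i, remaining, value) <= best[0]:
            return  # prune: even the optimistic bound can't beat the incumbent
        name, cost, val = projects[i]
        if cost <= remaining:            # branch 1: include project i
            chosen.append(name)
            branch(i + 1, remaining - cost, value + val, chosen)
            chosen.pop()
        branch(i + 1, remaining, value, chosen)  # branch 2: exclude it

    branch(0, budget, 0.0, [])
    return best[0], best[1]
```

The `bound(...) <= best[0]` test is where whole regions of the search space disappear without being enumerated — the same mechanism that lets Gurobi answer the 4,000-work-package question in seconds rather than enumerating 2^4000 subsets.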

What the solver can’t do is explain itself in plain English. Gurobi returns an optimal objective value and a binary inclusion vector — not a narrative a planner can take into a stakeholder review. The Empowered Scenario Experience didn’t change that math. It changed how much of the math the user had to internalize before they could defend the result.

The optimizer’s math was never the bottleneck. The bottleneck was a user trying to defend an AI recommendation in a budget meeting without a story. We didn’t change the model — we gave them the story.

Empowered Scenario Experience retrospective

What shipped

  • Scenario lifecycle, navigable.

    Draft / Submitted / Approved status progression with approver profile icons; folders, tags, and custom metadata for filtering and grouping. Decision history is now visible without leaving the scenario, and scenarios can be searched and organized the way an enterprise actually plans.

  • Investment grid redesign.

    Color-coded functions, customizable preferences, panels per decision area, icon-based tooltips. Stops being the part of the workflow users dread.

  • Pre-check engine.

    Catches infeasibility, missing inputs, and budget misconfigurations before the optimizer runs. Most failures become warnings before they become errors.

  • Probability disclosures on outputs.

    Each recommendation carries a confidence band so users plan around model uncertainty instead of being surprised by it.

  • Customer User Journey artifact.

    The end-to-end map became a shared reference across product, support, and sales — every team now plans against the same workflow, not their slice of it.

Outcome

Adoption

2× users optimizing

The number of users actively running scenario optimization doubled. AI moved from "interesting feature" to "part of how investment portfolios get built."

Usability

SUS 50 → 60

The System Usability Scale score gained 10 points — a substantial shift in the perceived ease of doing the core job. TAM acceptance rose by the same margin (60 → 70).

Retention

~100% on platform

Customer retention approached 100% during the same window. The optimizer became indispensable, not interchangeable — and the renewal motion didn’t need a discount to close.

Reach

$2.6T under decision

The decision engine now sits behind investment portfolios across oil, gas, electric utilities, and water — globally — with the same UX surface every operator now trusts.

What I’d do differently

  • Build trust into v1, not phase 2. The pre-check engine and the probability disclosures were both trust-recovery work — much cheaper to prevent the first failed run than to win a user back from it. Honest uncertainty + early-failure protection belong in the original PRD, not the next-quarter backlog.
  • Treat the audit trail as a sales artifact. The Draft → Submitted → Approved lifecycle was a UX decision, but it was the strongest enterprise-procurement story the team had. Should have led the GTM with it: every CIO conversation needed "who approved what, when, and why" answered in one screen.
  • Pair UX work with a customer-co-design loop, not a one-off journey project. The Customer User Journey initiative landed precisely because it was field-grounded. The mistake was treating it as a project. The pattern (live customer panel → feature decisions → re-validate quarterly) should have been the operating model from day one — every case study where I’ve simplified ruthlessly came out of running this loop.
