MIT Media Lab's 2025 GenAI Divide study measured what most organizations already suspect but haven't quantified: across $30–40 billion in enterprise AI investment, only 5% of initiatives are producing measurable returns. The other 95% stall, underdeliver, or quietly die in pilot.
The cause, according to the researchers, is not flawed algorithms or insufficient compute. It is "a lack of alignment between technology and business workflows," compounded by "skills gaps, workforce resistance, and cultural barriers."
The technology works. The translation doesn't.
McKinsey identified this archetype years ago—the "analytics translator"—and estimated that demand for this profile in the United States alone would reach into the millions. That prediction has been validated and then some. But the supply has not materialized, because career paths don't produce this person naturally.
The distinction is critical: most companies are not building AI. They are implementing it, integrating it, and operationalizing it. As one strategist observed, "Most companies aren't building LLMs; they're implementing them. And that's a very different skill set." Building a foundation model requires PhDs in statistics and deep learning. Implementing AI into a claims workflow, a reporting pipeline, or a product experience requires someone who understands the business domain, can prototype rapidly, and can communicate value in language the CFO and the frontline worker both understand.
McKinsey's own 2025 research found that organizations reporting significant financial returns from AI are twice as likely to have redesigned end-to-end workflows before selecting modeling techniques. That is fundamentally a design and systems-thinking exercise—not an engineering one.
The World Economic Forum's 2025 Future of Jobs report adds another dimension: creative thinking ranks fourth among the core skills employers identify globally, cited by 57% of respondents, yet it is "among the least acknowledged in hiring and promotion decisions."
The research converges on a single conclusion: if you find this person already inside your organization, the highest-leverage move is to invest in them.
This mismatch is particularly acute in healthcare and life sciences. The product-minded AI strategist in this sector needs to understand how a prior authorization workflow actually functions end-to-end before proposing AI augmentation. They need to understand why a clinician will reject an AI-generated recommendation that doesn't align with clinical judgment, regardless of the model's accuracy score. They need to understand what "explainability" means to a compliance officer versus a data scientist, and how regulatory constraints shape what can and cannot be automated.
No amount of TensorFlow expertise substitutes for that domain fluency. The organizations building AI strategy roles around business and clinical domain experts—then upskilling them on AI capabilities—are the ones moving from pilots to scaled production while competitors cycle through their third failed consulting engagement.
The rarest version of this profile is someone who developed it organically—through a career trajectory that conventional hiring would never surface but that produced exactly the combination the market says it cannot find.
Consider someone whose foundation is systems thinking about people and institutions. A sociology degree that trained them to see how complex human systems interact, how communication flows through organizations, how behavior is shaped by structure, shared knowledge, and context. Layer on an MBA in marketing, not for the credential but for the analytical framework: data-driven understanding of human behavior, audience segmentation, messaging architecture, conversion systems, pricing strategy. Then fifteen years of applying that lens to progressively complex business infrastructure: digital strategy, analytics platforms, go-to-market architecture, website strategy, ABM programs, brand consolidation across acquisitions, reporting systems that connect marketing activity to pipeline and revenue impact.
That person has been doing systems design, stakeholder orchestration, and cross-functional translation for fifteen years. They just haven't been calling it AI strategy.
Now give that person nine months of intensive AI system-building. Not courses. Not certifications. Building.
Nine major production systems in nine months, starting from zero programming experience. Each compounds on the infrastructure of the ones before it.
The acceleration pattern tells the story of infrastructure thinking: the first system took twelve weeks; by the ninth, the same scope of work took two days. Infrastructure compounds. Every system reduced the activation energy for the next.
And through this work, an unexpected discovery: twelve structural parallels between ADHD cognition and AI agent architecture. Not metaphor, but structural equivalence: context management crises, the documentation imperative, orchestration requirements, acceleration through constraints, infrastructure-first design. The infrastructure my ADHD brain requires to function is the same infrastructure AI systems require to perform. That insight didn't just explain my trajectory; it became my "why," and the architectural foundation for everything I've built since.
The market research is unambiguous: the profile that determines whether enterprise AI investments produce returns or become expensive write-offs is not a machine learning engineer. It is a systems thinker who can translate between business reality and technical capability, prototype rapidly, redesign workflows before selecting models, and communicate across every constituency in the organization.
That profile is rare. It cannot reliably be hired externally. The organizations that recognize it internally—and invest in it—are the ones converting AI from pilot to production while the rest cycle through consultants.
The work is not theoretical. It is already built, already scoped to Cedar Gate's specific operations, already demonstrating the compound returns that infrastructure thinking produces. The question is not whether this capability exists here. It does. The question is what we build next.