Why good ideas fail: measure adoption before it’s too late
Published in Public Health, Statistics, and Business & Management
Every year, governments, philanthropic funders, and industries invest billions of dollars in what is announced as “the next big thing.” These initiatives are often framed as breakthroughs capable of transforming society: digital health platforms promising to revolutionise the delivery of care, artificial intelligence tools designed to enhance decision-making across sectors, renewable energy solutions intended to reshape entire economies, and innovative models of care developed to address longstanding inequities.
In their early stages, these projects appear unstoppable. Pilot programs deliver impressive results. Presentations captivate policymakers, investors, and communities. Early data spark confidence that something large, scalable, and transformative is within reach. Hope is high.
Yet the glow of promise often fades. Many of these initiatives fail to take root in the real world. They are not dismissed because the ideas were inherently poor or the evidence was invalid. Instead, they stumble because one fundamental question was overlooked: will people actually adopt these solutions, and will they continue to use them over time?
This persistent gap between invention and adoption is among the most costly weaknesses in modern innovation systems. Health services lose millions when pilot projects never scale. Climate programs stall when proven solutions fail to embed in practice. Education reforms falter when communities lack the capacity to carry them forward. The result is not just wasted resources, but delayed progress on the urgent challenges of our time.
Our era is defined by complexity, marked by challenges such as climate resilience, the responsible development of AI, the pressures of ageing populations, and the widening digital divide. These wicked problems demand more than inventive ideas and early evidence. They require rigorous methods that assess whether innovations can survive beyond the pilot stage, be adopted by people in their daily lives, and endure across various systems and contexts. Without adoption, innovation risks becoming an endless cycle of beginnings that never deliver their promise.
Why adoption matters for wicked problems
Wicked problems cannot be solved by technology alone. They require approaches that also consider behaviour, culture, systems, and context. Evidence of efficacy, while important, is not sufficient in itself. Decision-makers need to know whether people will adopt an innovation and continue to use it. Without adoption, even the most urgent and promising solutions (such as digital transformation in hospitals, large-scale renewable energy deployments, or carbon-neutral technologies in mining) risk being wasted.
Introducing an approach to close the adoption gap
These concerns about wasted potential and stalled adoption highlight a persistent gap: innovations are often tested for efficacy but not for their long-term usability, adoption, and sustainability. Addressing this gap requires an approach that is both scientifically rigorous and practically adaptable across complex systems.
To meet this need, an evaluation framework was created within implementation science that integrates multiple disciplines and perspectives. Initially referred to as PROLIFERATE, the framework provided a structured way to co-design, measure, and optimise innovations with those who would eventually use them. As artificial intelligence methods for predictive modelling and optimisation were incorporated, the framework evolved into PROLIFERATE_AI.
This approach combines a stepwise process with a multimethod foundation, enabling teams to move beyond traditional pilot testing. It allows research groups, industry leaders, policymakers, and communities to evaluate adoption before it is too late, while also ensuring that innovations can adapt and remain sustainable over time. What follows is an outline of how this approach works and how it can be applied to tackle the kinds of wicked problems where adoption is often the missing link between promising ideas and real-world impact, scale-up, and sustainability.
A multimethod foundation
PROLIFERATE_AI is designed as a multimethod framework to address exactly this gap. It combines several complementary methodological traditions, each providing a distinct lens.
It includes Participatory Action Research (PAR), which positions end-users, implementers, and decision-makers as co-researchers. This ensures the evaluation is grounded in lived realities and not in abstract assumptions detached from practice.
It also incorporates the Knowledge Translation complexity network model (KT-cnm), which helps map how innovations flow through networks of actors (policy, service, community, research, industry) and identifies where adoption accelerates or stalls.
Expert Knowledge Elicitation (EKE) is another essential component. This structured approach captures expert probability judgements about adoption constructs, making their uncertainty explicit, calibrated, and aggregated.
Additionally, Bayesian updating and predictive simulation are employed to integrate expert priors with pilot user data, while Monte Carlo simulation and bootstrapping explore uncertainty and run “what if” scenarios before scaling; a brief illustrative sketch appears after this overview of methods.
Finally, qualitative thematic analysis is used to analyse free-text responses, linking motivations, barriers, and optimisation strategies to the same constructs that are explored statistically.
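As a concrete, hedged illustration of the simulation component, the short Python sketch below bootstraps hypothetical pilot ratings for a single construct to quantify uncertainty before any scaling decision. The ratings, sample size, and construct choice are invented for illustration and do not come from the cited studies.

```python
# Illustrative only: bootstrap uncertainty for one adoption construct.
# The ratings below are hypothetical pilot data, not real study results.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-5 Likert ratings for "Would Understand" from 30 pilot users.
ratings = np.array([4, 5, 3, 4, 4, 2, 5, 4, 3, 4, 5, 4, 3, 4, 4,
                    5, 2, 4, 4, 3, 5, 4, 4, 3, 4, 5, 4, 4, 3, 4])

# Resample with replacement many times and recompute the mean each time.
boot_means = np.array([rng.choice(ratings, size=ratings.size, replace=True).mean()
                       for _ in range(10_000)])

# A 95% bootstrap interval answers "how confident are we in this construct so far?"
low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"Mean comprehension {ratings.mean():.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```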
Together, these complementary traditions make PROLIFERATE_AI both a formative tool (to optimise during rollout) and a summative tool (to assess adoption potential).
Capturing adoption constructs
At the heart of PROLIFERATE_AI are five adoption constructs: Would Understand (Comprehension), Would Use (Barriers), Would Preferentially Use (Motivations), Would Enjoy (Emotional Engagement), and Optimisation Strategies (Free Response).
Crucially, a wide range of methods (including Expert Knowledge Elicitation, surveys, digital traces, biomarker sensors, econometrics, and qualitative techniques) can be applied across all five constructs. This methodological flexibility allows researchers to triangulate findings and adapt to different contexts while maintaining rigour.
For Comprehension (Would Understand), evaluations can include Likert-scale surveys, think-aloud protocols, cognitive interviews, and digital traces such as error rates or navigation patterns. Experts can also be asked to provide probabilistic estimates of expected comprehension.
For Barriers (Would Use), data can be drawn from expert judgements on likely uptake, ethnographic observation of workflows, usage logs or system audit trails, and even wearable sensors that capture workload or stress in real time.
For Motivations (Would Preferentially Use), techniques such as discrete choice experiments, conjoint analysis, semi-structured interviews exploring perceived benefits and trade-offs, survey rankings of motivational factors, and expert judgements on preference drivers can all be applied.
For Emotional Engagement (Would Enjoy), researchers can use free-text responses coded for affect, sentiment analysis of written or spoken input, physiological biomarkers such as heart rate variability or galvanic skin response, facial expression or voice tone analysis during use, and expert judgements on expected emotional responses.
For Optimisation Strategies (Free Response), data can be collected through open-ended survey items, PAR-based co-design workshops with end-users, Delphi or nominal group methods for prioritising changes, embedded feedback tools in digital systems, and structured expert elicitation on the most effective optimisation levers.
By drawing on these varied sources (e.g., self-report, expert judgement, direct observation, digital traces, and biomarkers), PROLIFERATE_AI generates a multidimensional and equity-sensitive picture of adoption dynamics.
A practical pathway for researchers and teams
The PROLIFERATE_AI pathway can be described as a ten-stage, step-by-step process.
The first stage is framing the study: a transdisciplinary panel of experts, practitioners, and community representatives is convened. The team uses the Knowledge Translation complexity network model to map the system, identify critical nodes and hubs, and locate the stages of knowledge translation where adoption may be supported or blocked. The study is then registered for both formative and summative evaluation.
The second stage involves defining the constructs. The five adoption anchors are co-designed with the panel to ensure cultural and contextual fit.
The third stage involves designing the instruments. Surveys, elicitation protocols, digital metrics, and qualitative questions are combined into a tailored evaluation package. These instruments are piloted with the panel and refined using predictive simulation.
If considered relevant, the fourth stage involves running Expert Knowledge Elicitation. Experts provide quantiles for each construct, probability distributions are fitted, calibration is checked, and results are pooled to create well-calibrated priors.
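As a rough sketch of how elicited quantiles can become a prior (an assumption-laden illustration, not the published elicitation protocol), the code below fits a Beta distribution to each expert's 5th, 50th, and 95th percentile judgements for one construct and pools the fitted priors with equal weights.

```python
# Hedged sketch: turn expert quantile judgements into a pooled Beta prior.
# Quantile levels, expert answers, and equal pooling weights are all assumptions.
import numpy as np
from scipy import stats, optimize

probs = np.array([0.05, 0.50, 0.95])   # quantile levels asked of each expert

def fit_beta(quantiles):
    """Find Beta(a, b) whose percentiles best match one expert's answers."""
    def loss(params):
        a, b = params
        return np.sum((stats.beta.ppf(probs, a, b) - quantiles) ** 2)
    result = optimize.minimize(loss, x0=[2.0, 2.0], bounds=[(0.01, 100), (0.01, 100)])
    return result.x

# Hypothetical judgements from three experts: the proportion of users who "would use" the tool.
experts = [np.array([0.45, 0.65, 0.80]),
           np.array([0.55, 0.70, 0.85]),
           np.array([0.40, 0.60, 0.75])]

fits = [fit_beta(q) for q in experts]

# Equal-weight linear opinion pool of the fitted priors (calibration checks omitted here).
pooled_mean = float(np.mean([a / (a + b) for a, b in fits]))
print(f"Pooled prior mean for the construct: {pooled_mean:.3f}")
```

Other prior families or pooling weights could equally be used; the point is that expert judgement enters the analysis as an explicit, inspectable distribution.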
The fifth stage involves collecting end-user data. Instruments are administered to frontline users across diverse roles and demographics. Open-text items and continuous feedback loops capture the lived experience of adoption, and Expert Knowledge Elicitation surveys can be administered alongside them where appropriate.
The sixth stage is to analyse quantitatively. Expert-derived distributions are treated as priors and updated with user data using Bayesian inference. Simulation is used to estimate uncertainty and to test alternative scenarios.
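A minimal sketch of this stage, assuming the pooled expert prior is expressed as a Beta distribution and using invented numbers throughout: the prior is updated with end-user data via the conjugate Beta-Binomial rule, and Monte Carlo draws from the posterior answer a “what if” question about a larger rollout.

```python
# Illustrative Beta-Binomial updating and posterior simulation; all figures are hypothetical.
from scipy import stats

# Assumed pooled expert prior for "Would Use": Beta(13, 7), i.e. prior mean 0.65.
a_prior, b_prior = 13.0, 7.0

# Hypothetical end-user data: 81 of 120 frontline users scored the construct above benchmark.
above_benchmark, n_users = 81, 120

# Conjugate update: Beta(a, b) + Binomial(k of n) -> Beta(a + k, b + n - k).
posterior = stats.beta(a_prior + above_benchmark, b_prior + (n_users - above_benchmark))

# Monte Carlo "what if": in a site with 500 staff, how likely is uptake below 300?
theta = posterior.rvs(size=50_000, random_state=1)
simulated_uptake = stats.binom.rvs(n=500, p=theta, random_state=2)
print(f"Posterior mean uptake: {theta.mean():.2f}")
print(f"P(fewer than 300 of 500 staff adopt): {(simulated_uptake < 300).mean():.2f}")
```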
The seventh stage is to analyse qualitatively. Free responses are thematically coded, themes are mapped to constructs, and interpretations are validated with the transdisciplinary panel.
The eighth stage involves integrating and scoring. At this point, Bayesian posteriors from the quantitative analysis are combined with qualitative insights from thematic coding to produce an adoption profile. Outcomes are classified into four tiers (poor = 0–1 constructs above benchmark, average = 2 constructs, good = 3 constructs, excellent = 4–5 constructs). This provides a structured picture of adoption strength, including subgroup variation. For example, nurses may find an innovation engaging while junior doctors struggle to comprehend it. The scoring system, therefore, functions as a diagnostic tool, showing where adoption is strong, where it is weak, and for whom.
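The tiering rule itself can be expressed in a few lines of code. The sketch below applies it to a hypothetical adoption profile; the 0.7 benchmark and the construct scores are invented for illustration.

```python
# Classify an adoption profile by counting constructs above a benchmark (tiers as described above).
BENCHMARK = 0.7  # hypothetical benchmark; set per study

TIERS = [(range(0, 2), "poor"), (range(2, 3), "average"),
         (range(3, 4), "good"), (range(4, 6), "excellent")]

def adoption_tier(construct_scores: dict, benchmark: float = BENCHMARK) -> str:
    """Return the tier implied by how many constructs exceed the benchmark."""
    n_above = sum(score > benchmark for score in construct_scores.values())
    return next(label for band, label in TIERS if n_above in band)

# Hypothetical profile: three of five constructs above benchmark -> "good".
profile = {"Would Understand": 0.82, "Would Use": 0.68, "Would Preferentially Use": 0.74,
           "Would Enjoy": 0.79, "Optimisation Strategies": 0.66}
print(adoption_tier(profile))
```

Reporting the per-construct scores alongside the tier preserves the diagnostic detail, including subgroup differences such as the nurse versus junior doctor contrast described above.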
The ninth stage is to co-design optimisation strategies. Once areas of weakness are identified, the transdisciplinary group develops tailored solutions. If comprehension is low, onboarding can be improved. If emotional engagement is lacking, targeted campaigns or role-specific training can be created. What matters is that responses are generated collaboratively, so they fit the lived realities of end-users rather than external assumptions.
The tenth stage involves iterating longitudinally. The evaluation cycle is repeated at multiple timepoints (for example, pre-launch, post-training, six months, and one year). This enables teams to track adoption trajectories and test whether interventions alter patterns of practice. Over time, the system evolves from a one-off diagnostic into a learning tool that embeds adaptation and continuous improvement.
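In practice, this can be as simple as recomputing the same construct scores at each timepoint and reading the trajectory, as in the toy example below (all values hypothetical).

```python
# Toy trajectory: the same construct re-scored at successive timepoints (hypothetical values).
timepoints = ["pre-launch", "post-training", "6 months", "12 months"]
would_use = {"pre-launch": 0.55, "post-training": 0.71, "6 months": 0.68, "12 months": 0.74}

previous = None
for t in timepoints:
    score = would_use[t]
    trend = "" if previous is None else (" (up)" if score > previous else " (down)")
    print(f"{t:>13}: Would Use = {score:.2f}{trend}")
    previous = score
```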
From evidence to action
The value of PROLIFERATE_AI is not only in identifying adoption challenges but in turning evidence into practical action. Because the framework is participatory, the same panel that diagnoses problems also helps design solutions that can be used immediately. This might mean producing policy briefs that translate findings for decision-makers, developing onboarding packages that address specific comprehension gaps, or designing communication strategies that resonate with diverse communities. What distinguishes this process is that the recommendations are co-created, increasing their legitimacy and likelihood of uptake.
The framework also enables decision-makers to prioritise. Instead of broad, generic advice, they receive targeted guidance on which groups require support, which aspects of the innovation need refinement, and what resources are likely to make the greatest difference. In this way, evaluation becomes a strategic asset for planning and resource allocation rather than an academic exercise.
Beyond snapshots: longitudinal evaluation
PROLIFERATE_AI also recognises that adoption is not static. Contexts shift, user groups change, and external pressures (such as new policies, economic shifts, or technological updates) can reshape the environment in which an innovation operates. A one-time assessment cannot capture these dynamics.
By applying the framework longitudinally, teams gain insight into whether adoption stabilises, grows, or declines. They can see if early enthusiasm fades, if refinements are needed, or if different strategies are required to maintain engagement. This turns evaluation into a continuous feedback system, allowing organisations to anticipate obstacles, sustain momentum, and adapt interventions to keep innovations relevant. The result is not only better evidence but stronger pathways to long-term adoption and impact.
Closing reflection
In an era defined by wicked problems, we must move beyond invention. The real challenge is not only to build innovations but to embed them in real systems, for real people, in ways that endure. This requires methods that are participatory, network-aware, statistically rigorous, and sensitive to lived experience. PROLIFERATE_AI is one such approach. It demonstrates that adoption is not a random occurrence. It can be measured, modelled, and optimised.
Key references and applications
- Co-designing, measuring, and optimizing innovations and solutions within complex adaptive health systems. Piñero de Plaza MA, Yadav L, Kitson A (2023). https://doi.org/10.3389/frhs.2023.1154614
- Human-centred AI for emergency cardiac care: Evaluating RAPIDx AI with PROLIFERATE_AI. Maria Alejandra Piñero de Plaza, Kristina Lambrakis, Fernando Marmolejo-Ramos, Alline Beleigoli, Mandy Archibald, Lalit Yadav, Penelope McMillan, Robyn Clark, Michael Lawless, Erin Morton, Jeroen Hendriks, Alison Kitson, Renuka Visvanathan, Derek P. Chew, Carlos Javier Barrera Causil (2025). https://doi.org/10.1016/j.ijmedinf.2025.105810