Biologists work to understand the most complex systems in human experience, but they’re held back. Right now, they’re limited by things like paper notebooks, pipettes, USB drives, and Excel spreadsheets. Taking advantage of digital tools and automation will enable an utterly transformed scale and sophistication of science, one that matches the ever-growing ambition and complexity of bioscience endeavors. But how should a research organization choose the digital tools that will enable this transformation, and how can we make sure these tools are adopted?
The key is to focus on the experience of the scientist who runs experiments.
How could digital tools make it easier for scientists to design experiments that address the reality of biological complexity? How could those same tools help solve the planning and execution problems those designs create? And how can they gather both the experimental data and the metadata, structuring and storing them in a ‘FAIR’ (findable, accessible, interoperable, and reusable) way so that analysis and reporting are better now and in the future?
It’s not an easy challenge but, when it works—and I’ve seen it happen—the results are incredible.
The Urgent Need to Find and Use the Right Tools
The end goal is producing a digital representation of what goes on in the lab and the results this work generates. Right now, we typically do a weak version of this with electronic lab notebooks (ELNs). While there’s an abundance of simple and intuitive ELNs on the market, they place all of the onus on scientists to record their work in such a way that their methods and results stand a chance of future reuse. Like anything else we do, this process is subject to human error and constrained by the number of hours available in a day.
The future of digitization will look much more sophisticated. Digital tools will enable scientists to design and plan experiments in the digital world from the very start, run them in the real world, and then automatically gather the generated data and metadata that describe exactly what’s been run. Combined, this lifts the burden of detailed technical record-keeping from scientists while also letting them run far more ambitious experiments.
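As a minimal sketch of what “automatically gathered, FAIR-structured” records could look like, here is a hypothetical run record in Python. All names (the protocol, parameters, and result fields) are illustrative assumptions, not a real platform’s schema; the point is that identity, timing, settings, and results are captured together by the tooling rather than transcribed later by the scientist.

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ExperimentRecord:
    """A run record that captures metadata alongside results, not as an afterthought."""
    protocol: str      # the design that was planned digitally
    parameters: dict   # the exact settings used in the real-world run
    results: dict      # data gathered back from instruments
    # Findable: every record gets a persistent identifier and a timestamp.
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    recorded_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_json(self) -> str:
        # Interoperable and reusable: a plain serialization any downstream tool can parse.
        return json.dumps(asdict(self), indent=2)

# Hypothetical example run.
record = ExperimentRecord(
    protocol="pcr_amplification_v2",
    parameters={"cycles": 35, "anneal_temp_c": 58.0},
    results={"yield_ng_ul": 42.1},
)
print(record.to_json())
```

Because the record is structured at the moment of capture, the same object serves the scientist today (analysis, reporting) and the organization later (search, reuse, machine learning).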
How Do We Make Those Tools Stick? It’s All About the People
Finding the right digital tools and platforms is only the start. There are many more barriers to the digitized lab that have nothing to do with the technology itself; they’re more to do with human nature and the activation energy needed for adopting something new.
The vast majority of scientists are, unsurprisingly, busy doing science. In my experience, they don’t always feel like they have the time to take a step back and learn a new tool, even if it’s one that could transform the work they do and broaden the horizons of possibility. This places a huge emphasis on making these tools a delight to use. User experience is as important as the power of the features in a given piece of software. If it’s a pain to use, you’re not going to use it—no matter how good it is.
But then what good is a hugely powerful tool if it never sees the light of day? Even with beautifully designed software, it’s still important to have a change management strategy: open-door training, case studies that show the improvements this new tool brings or, even better, examples of scientific findings that would otherwise have been impossible. This is often best served by deploying new tools in one place first, demonstrating their impact, and making sure adoption is solid before moving on to bigger and better things.
Okay, But Now What? Building Towards Your Lab’s True Potential
With your tools established and well adopted, you’re ready for the hard part: moving towards high-performance enterprise digitalization. This works in three distinct stages, all of which must be taken step by step. Many have tried to skip ahead here. Many have failed.
The first step is broadening the impact of what you adopted earlier. This means building outward from a single-use application to multiple applications, spreading benefits and impact throughout your organization. To do this, your software must be flexible enough to accept different parameters for different experimental uses, able to use templates to save time ‘building from scratch’, and able to integrate with different equipment and software so that it can work wherever it’s needed.
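The template-and-parameters idea can be sketched in a few lines. This is a hypothetical illustration, assuming a simple dictionary-based template; real platforms will have richer schemas, but the mechanic is the same: shared defaults, per-use overrides, and validation so an unknown parameter fails loudly instead of silently.

```python
# Hypothetical assay template: shared defaults that many experiments can start from.
BASE_ASSAY_TEMPLATE = {
    "plate_format": 96,
    "incubation_hours": 24,
    "replicates": 3,
}

def build_run(template: dict, **overrides) -> dict:
    """Instantiate a run from a template, rejecting parameters the template doesn't define."""
    unknown = set(overrides) - set(template)
    if unknown:
        raise ValueError(f"Unknown parameters: {sorted(unknown)}")
    # Overrides win; everything else keeps the template default.
    return {**template, **overrides}

# The same template serves two different experimental uses with no rebuilding.
screen_run = build_run(BASE_ASSAY_TEMPLATE, plate_format=384)
followup_run = build_run(BASE_ASSAY_TEMPLATE, replicates=6)
```

The validation step matters more than it looks: it is what lets one template travel safely between teams without each group silently diverging from the shared method.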
With these developments in hand, the next step is to aim for organization-wide scalability. Here, we must aim to tighten the bond between R&D and IT. Your concerns at this stage will look very different from your early considerations where you were trying to land one tool and make it work. Here, a tighter bond will unlock the true potential of the software you’re using and enable ‘global’ adoption and transformation in your organization.
Without this bond between R&D and IT, you’ll find it hard to move to the final level of maturity: optimizing for deeper intelligence. With more of your R&D data sets properly structured, fully contextualized and entirely digital, it becomes much easier to plug into artificial intelligence and machine learning technologies.
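To make the payoff concrete: when run records share a structure, turning an archive of experiments into a model-ready table is a trivial transformation rather than a months-long data-cleaning project. The field names below are hypothetical, carried over from the earlier sketch, and the flattening shown is a bare-bones stand-in for a real feature pipeline.

```python
# Hypothetical structured run records, as a digitized lab might accumulate them.
records = [
    {"parameters": {"anneal_temp_c": 58.0, "cycles": 35}, "results": {"yield_ng_ul": 42.1}},
    {"parameters": {"anneal_temp_c": 60.0, "cycles": 30}, "results": {"yield_ng_ul": 38.7}},
]

def to_feature_rows(records: list) -> tuple[list, list]:
    """Flatten structured records into (features, targets) ready for a learning algorithm."""
    X = [[r["parameters"]["anneal_temp_c"], r["parameters"]["cycles"]] for r in records]
    y = [r["results"]["yield_ng_ul"] for r in records]
    return X, y

X, y = to_feature_rows(records)
```

With unstructured notebook entries, this step is where ML projects stall; with contextualized records, it is a one-function transformation.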
With the Right Foundations, We Move Towards Better Science
Digitizing R&D is a huge challenge. It needs the right digital tools and platforms, those tools and platforms need great UX, and they must have the potential for scaling across whole organizations. To succeed, they must be deployed with diligent change management in a culture that promotes and rewards innovation with new tools.
If this sounds like a lot, it is. But the rewards will be massive. When we leap from the tedium of slow, manual, error-prone lab work and into a world where scientists conduct an orchestra of fully digitized labs aided by AI and automation, we’ll have entered a new era. The limits on the experiments we can run will evaporate and our ability to work with biology will finally be a match for the scale of the problems we need to solve.