A week before the official German COVID-19 contact tracing app ("Corona-Warn-App", or CWA for short) landed in app stores on June 16, 2020, we had no idea that this app would keep us busy for the rest of the year. Even two weeks later, after the app had been downloaded millions of times and was quickly becoming one of the most popular contact tracing apps, our research idea had not yet been born. In fact, we were busy designing and implementing a small research project on public opinion during the pandemic. It wasn't until another week later, in early July, that the penny finally dropped. We got in touch with the polling company we had collaborated with before and asked two simple questions: First, would they be ready to share data from their special tracking panel on mobile app usage? And second, would they let us run some experiments on that panel that might affect panelists' app usage behaviour?
Web and mobile tracking data have been on the radar of social scientists for several years. Technology companies like RealityMine or Wakoopa have developed passive metering software that, once installed on participants' desktop or mobile devices, provides a 360-degree view of their browsing and mobile app usage. Market research companies have integrated this software into their panels to gain valuable additional information about their panelists as consumers, but the data are also extremely valuable for social science research. Consider communication research as an example: until a couple of years ago, measuring and analysing news consumption with direct survey questions or diaries was tedious and unsatisfactory. Suddenly, behavioural data could be collected automatically and in unprecedented detail. That was the first time we had worked with this kind of data. In another research project, we analysed the influence of online partisan media on political opinion formation as well as the effects of voting advice applications on political attitudes and behaviour.
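To make this concrete, here is a minimal sketch of what such session-level metering records might look like, written in pandas. The field names and values are hypothetical and not taken from any vendor's actual export format; `de.rki.coronawarnapp` is the app's real Android package ID.

```python
import pandas as pd

# Hypothetical export from a passive metering panel: one row per app
# usage session, timestamped to the second. Field names are illustrative,
# not any vendor's actual schema.
sessions = pd.DataFrame({
    "panelist_id": [101, 101, 202],
    "app_id": ["de.rki.coronawarnapp", "com.example.news",
               "de.rki.coronawarnapp"],
    "start": pd.to_datetime(["2020-06-16 08:01:12", "2020-06-16 08:05:40",
                             "2020-06-17 19:30:05"]),
    "end": pd.to_datetime(["2020-06-16 08:01:45", "2020-06-16 08:12:02",
                           "2020-06-17 19:30:40"]),
})

# Session duration in seconds -- a level of detail that direct survey
# questions or diaries cannot reliably deliver.
sessions["duration_s"] = (sessions["end"] - sessions["start"]).dt.total_seconds()
print(sessions)
```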
The technical skills we gained from these projects laid the foundation for our work on the COVID-19 contact tracing app. It became clear quite early on that, despite the app's apparent success, it would not achieve the level of uptake needed in the medium term to effectively support contact tracing. Could uptake be increased through targeted appeals and informative messages? On a more fundamental level, we also had doubts about the app's dissemination patterns: what if the app was mostly used by those who would adhere to non-pharmaceutical measures such as social distancing anyway?
A major obstacle to answering these kinds of questions lies in the privacy-by-design principles that have guided the development of many digital contact tracing (DCT) apps. Mere download statistics are silent about actual usage and user profiles, and while surveys can be employed to monitor usage patterns, their reliance on self-reports of sensitive behaviours makes them susceptible to reporting biases. This is where a combined survey-and-tracking setup comes in handy.
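For illustration: once survey and tracking data are linked at the panelist level, misreporting can be quantified directly. A minimal sketch with entirely hypothetical indicators (`self_report_user` from a survey question, `observed_user` from the tracking data), not our actual variables:

```python
import pandas as pd

# Hypothetical linked records: self-reported app use from the survey vs.
# use observed in the tracking data for the same panelists.
linked = pd.DataFrame({
    "panelist_id": [101, 202, 303, 404],
    "self_report_user": [True, True, False, False],
    "observed_user": [True, False, False, True],
})

# Cross-tabulating the two measures reveals over- and under-reporting
# that self-report data alone would hide.
print(pd.crosstab(linked["self_report_user"], linked["observed_user"]))
```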
From this point on, things moved pretty quickly. Funding for the survey setup was already in place, and adding a tracking component did not make the project much more expensive. The work now shifted to conceptualising the survey and the treatments we developed to boost app usage. The project was pre-registered on July 29, about four weeks after the project idea was born, and we started data collection. A second pre-registration for the later waves and experiments followed. Half a year later, the results were published in Nature Human Behaviour.
One big advantage of our design was that it allowed for retrospective data collection. Even though the survey did not go into the field until about six weeks after the app launch, we were able to work with data from the very beginning, which was key to retrieving reliable measures of app usage. As it turned out later, however, the fact that we had arranged things so quickly did have some unfortunate consequences: only after we were in the field did we notice that, due to a technical failure, no app usage data was being collected from iPhone users. These data could not be collected retrospectively. Once tracking on Apple devices was fixed, we decided to collect more data on all devices after the original data collection period had ended.
Tracking data provided insights which would not have been possible with aggregated download statistics or classical surveys alone: We were able to observe individual app usage sessions, tracked to the second and over a period of more than a hundred days, even for tracking panelists who did not participate in the survey or who dropped out at one of the later waves.
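As a sketch of what such data permits, consider collapsing second-level session records into a per-panelist daily usage panel. This reuses the hypothetical `sessions` table from the earlier snippet; the function and its output are illustrative, not our actual analysis pipeline.

```python
import pandas as pd

def daily_usage(sessions: pd.DataFrame, app_id: str) -> pd.DataFrame:
    """Collapse second-level session records into a panelist-by-day panel
    with a session count and total seconds of use."""
    app = sessions.loc[sessions["app_id"] == app_id].copy()
    app["date"] = app["start"].dt.date
    daily = (app.groupby(["panelist_id", "date"])
                .agg(n_sessions=("start", "size"),
                     seconds_used=("duration_s", "sum"))
                .reset_index())
    daily["active"] = 1  # any recorded session counts as active use that day
    return daily

# Usage with the hypothetical sessions table sketched earlier; a balanced
# person-day panel over the full 100+ day window can then be built by
# reindexing each panelist over all calendar dates.
usage = daily_usage(sessions, "de.rki.coronawarnapp")
print(usage)
```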
We think it is worthwhile to explore the potential of this data further, as other researchers are now doing. Impact evaluations like ours can build on existing survey and tracking panel infrastructure in a growing number of countries and markets. The costs of such studies are reasonable, especially compared to what usually goes into the development and promotion of the apps themselves. At the same time, building informed consent into a commercial passive tracking panel makes it possible to use digital data without compromising privacy rights. We look forward to seeing more applications like ours.