Almost daily, a new study is published proclaiming that smartphones, social media, and video games, among other digital media, are negatively affecting mental health or well-being. These technologies have been blamed for everything from increases in teen depression and suicide to a higher incidence of attention deficit hyperactivity disorder (ADHD) and violence. Frequently, this research is accompanied by provocative headlines that garner attention from parents, practitioners, and policymakers.
With very few exceptions, these studies rely on participant-provided estimates of media use (self-reports) as a proxy for actual media use. They use questions such as “On average, how many minutes do you spend on your smartphone on a typical day?” or “How many times did you open Facebook yesterday?” to measure a participant’s behaviour and draw inferences about various outcomes. It is well established in the socio-behavioural sciences that self-reports of behaviour reflect what participants believe they do, rather than what they actually do. This would not be much of a concern if people’s perceptions of their own behaviour accurately reflected their actual behaviour. However, due to a range of perceptual, cognitive, and social biases, this is often not the case, especially when individuals are asked to estimate common and highly frequent behaviours. Despite this, researchers have relied on estimates of media use for decades to study how people use digital media and the potential outcomes of this behaviour.
Another way to quantify media use is to examine device logs, directly capturing on-device events. Recent technological advancements have made this approach more feasible, and its use in the literature is increasing (e.g., via Apple Screen Time or other usage-logging apps). These log measures provide a means of testing the convergent validity of self-report measures: do people report media use that matches what their devices record?
A number of studies had indicated a weak association between self-report estimates and likely more accurate logs of device or application usage, and researchers like Jessica Flake and Eiko Fried had been drawing attention to measurement validity in the social sciences. Prompted by this work, I saw the need to systematically examine the literature and determine, given current evidence, the degree to which self-reported estimates of media use accurately reflect participants’ actual behaviour as indexed by logs of their usage. Given the near-universal adoption of self-report measures, and the wide-reaching individual and societal implications of research in this area, the validity and accuracy of these measures were an important unanswered question.
In the early days of the COVID-19 pandemic, I reached out to Brit Davidson (University of Bath, UK) on Twitter to discuss the possibility of a meta-analysis of research in which both self-reported and logged measures of media use were collected. After working out an initial design for the study, we expanded the team to include the experience and expertise of Craig Sewall (University of Pittsburgh, USA), Jacob Fisher (University of Illinois Urbana-Champaign, USA), Hannah Mieczkowski (Stanford University, USA), and Daniel Quintana (University of Oslo, Norway). With none of us having ever met in person, this was truly a pandemic project: all communication has taken place through digital channels like Twitter, Slack, and Zoom. Now, almost a year since our first discussions, in a paper published in Nature Human Behaviour, we systematically demonstrate just how poorly self-reports of media use reflect device-logged measures.
To address the question of how closely self-report estimates relate to logs of actual usage, we set out to identify every study that compared logged or tracked media use measures with equivalent self-reports. Using a combination of automated searches, manual searches, and public calls, we screened over 12,000 articles for inclusion. In the end, we found 47 studies that included both types of measures, from which we extracted 106 comparisons.
Using standard meta-analytic procedures, we found that, while logged and self-reported measures are positively related, the strength of this relationship would generally be considered too weak to conclude that these measures are good substitutes for each other. Participants’ estimates of their usage were accurate in only about 5% of the studies we considered. We also found that, while self-reports are rarely an accurate reflection of usage logs, there was no clear pattern of either under- or over-reporting: participants under- and over-reported their usage in equal measure. While we did not find evidence of any methodological moderators, other research has shown that self-reports of media use seem to reflect individual difference factors other than media use per se (e.g., depression).
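For readers curious about what those procedures involve, the sketch below illustrates one standard way to pool correlations across studies: Fisher’s r-to-z transformation combined with a DerSimonian-Laird random-effects model. The study values are invented for illustration, and this is a simplified stand-in rather than the exact analysis reported in our paper, but it captures the core logic of weighting each study’s correlation by its precision while allowing for heterogeneity between studies.

```python
import numpy as np

# Hypothetical per-study data: the correlation between self-reported and
# logged media use, and each study's sample size. These numbers are made
# up for illustration; they are not the comparisons from the paper.
correlations = np.array([0.25, 0.40, 0.55, 0.30, 0.47])
sample_sizes = np.array([120, 300, 85, 210, 150])

# Fisher's r-to-z transform stabilises the sampling variance of correlations.
z = np.arctanh(correlations)
var_z = 1.0 / (sample_sizes - 3)   # approximate sampling variance of z
w = 1.0 / var_z                    # inverse-variance (fixed-effect) weights

# DerSimonian-Laird estimate of between-study heterogeneity (tau^2).
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
df = len(z) - 1
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)

# Random-effects weights add tau^2, then pool and back-transform to r.
w_re = 1.0 / (var_z + tau2)
z_pooled = np.sum(w_re * z) / np.sum(w_re)
r_pooled = np.tanh(z_pooled)

print(f"Pooled correlation: {r_pooled:.2f}, tau^2: {tau2:.3f}")
```

The back-transformed pooled correlation is the kind of summary effect that is then judged against conventional benchmarks for whether two measures can stand in for one another.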
In addition to these findings, we also investigated whether “problematic” (e.g., excessive or so-called “addictive”) media use measures were suitable substitutes for logged usage. Although there are well-documented validity issues with these measures (see this study for an innovative demonstration), researchers frequently use problematic media use measures to make claims about the drivers and outcomes of usage itself. Perhaps unsurprisingly, we found an even weaker association between these measures and usage logs. This aligns with work suggesting that such measures are more reflective of various mental health outcomes than of media use itself.
Taken together, our findings demonstrate that self-reports of media use do not exhibit convergent validity with device-logged measures of media usage. Across studies, contexts, media, and devices, our synthesis shows that the association between self-reports of media usage and equivalent usage logs is generally insufficient for measures that supposedly index the same behaviour. Not only does this finding have far-reaching implications for researchers across fields like communication, psychology, and information systems but, perhaps of equal importance, it suggests caution before implementing policy that restricts digital media use. Given the quality of current evidence, we simply do not know enough yet about the actual effects (both positive and negative) of our media use.
Researchers have overwhelmingly relied on self-report measures to study the uses and effects of digital media, and our data suggest that much of this work may be on unstable footing. Indeed, this research indicates that we may need to reconsider much of the extant evidence regarding the harmful (and beneficial) effects of media use. Additionally, our findings suggest that we need to re-evaluate how we conduct our research: valid measurement is fundamental to credible scientific findings. Researchers interested in the uses and effects of digital media need to embrace the opportunities provided by new digital technologies and digital trace data, and move beyond subjective measures.
For the public, our findings suggest that caution is warranted when considering research on outcomes associated with media use. We cannot simply take claims of harmful effects at face value. Our results imply that there is a need, amongst researchers, journalists, and members of the public, to reflect on the quality of evidence when engaging with research on media uses and effects.