Original research
From screen time to the digital level of analysis: a scoping review of measures for digital media use in children and adolescents
  1. Dillon Thomas Browne1,
  2. Shealyn S May1,
  3. Laura Colucci1,
  4. Pamela Hurst-Della Pietra2,
  5. Dimitri Christakis3,
  6. Tracy Asamoah4,
  7. Lauren Hale2,
  8. Katia Delrahim-Howlett5,
  9. Jennifer A Emond6,
  10. Alexander G Fiks7,
  11. Sheri Madigan8,
  12. Greg Perlman2,
  13. Hans-Jürgen Rumpf9,
  14. Darcy Thompson10,
  15. Stephen Uzzo11,
  16. Jackie Stapleton12,
  17. Ross Neville13,
  18. Heather Prime14
  19. The MIST Working Group
    1. Psychology, University of Waterloo, Waterloo, Ontario, Canada
    2. Renaissance School of Medicine, Stony Brook University, Stony Brook, New York, USA
    3. School of Medicine, University of Washington, Seattle, Washington, USA
    4. Media Committee, American Academy of Child and Adolescent Psychiatry, Washington, District of Columbia, USA
    5. Division of Extramural Research, National Institute on Drug Abuse, North Bethesda, Maryland, USA
    6. The Dartmouth Institute for Health Policy and Clinical Practice, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
    7. Department of Pediatrics, Children’s Hospital of Philadelphia and Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
    8. Psychology, University of Calgary, Calgary, Alberta, Canada
    9. Psychiatry and Psychotherapy, Translational Psychiatry Unit, Research Group S:TEP, University of Luebeck, Luebeck, Germany
    10. School of Medicine, University of Colorado, Denver, Colorado, USA
    11. New York Hall of Science, Flushing, New York, USA
    12. Information Services and Resources, University of Waterloo, Waterloo, Ontario, Canada
    13. School of Public Health, Physiotherapy and Sports Science, University College Dublin, Dublin, Ireland
    14. Psychology, York University, Toronto, Ontario, Canada
    Correspondence to Dr Dillon Thomas Browne; dillon.browne@uwaterloo.ca

    Abstract

    Objectives This scoping review aims to facilitate psychometric developments in the field of digital media usage and well-being in young people by (1) identifying core concepts in the area of “screen time” and digital media use in children, adolescents, and young adults, (2) synthesising existing research paradigms and measurement tools that quantify these dimensions, and (3) highlighting important areas of need to guide future measure development.

    Design A scoping review of 140 sources (126 database, 14 grey literature) published between 2014 and 2019 yielded 162 measurement tools across a range of domains, users and cultures. Database sources were retrieved from Ovid MEDLINE, PsycINFO and Scopus, in addition to grey literature obtained from knowledge experts and organisations relevant to digital media use in children. To be included, the source had to: (1) be an empirical investigation or present original research, (2) investigate a sample/target population that included children or young persons between 0 and 25 years of age, and (3) include at least one assessment method for measuring digital media use. Reviews, editorials, letters, comments and animal model studies were all excluded.

    Measures Basic information, level of risk of bias, study setting, paradigm, data type, digital media type, device, usage characteristics, applications or websites, sample characteristics, recruitment methods, measurement tool information, reliability and validity.

    Results Significant variability in nomenclature surrounding problematic use and criteria for identifying clinical impairment was discovered. Moreover, there was a paucity of measures in key domains, including tools for young children, whole families, disadvantaged groups, and for certain patterns and types of usage.

    Conclusion This knowledge synthesis exercise highlights the need for the widespread development and implementation of comprehensive, multi-method, multilevel, and multi-informant measurement suites.

    • mental health
    • paediatrics
    • community child health
    • child & adolescent psychiatry
    • public health

    Data availability statement

    Data are available on reasonable request. All data relevant to the study are included in the article or uploaded as online supplemental information; extended data are available on request.


    This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.

    Strengths and limitations of this study:

    • This scoping review has important and timely objectives, being among the first to synthesise the measurement tools that assess child digital media use on a large scale.

    • Many low-risk, reliable and valid measurement tools from a variety of databases, institutional reports and guidelines are included.

    • Data extraction focused on the source’s methodology (ie, the measurement tools), rather than the data of each source, presenting a novel approach to knowledge synthesis.

    • Non-English measurement tools and tools published more than 5 years before the search were not included in this scoping review, limiting the sources that were assessed.

    • A variety of gaps in measurement were identified, including assessments for young children, whole families and disadvantaged groups, and non-self-report scales.

    Introduction

    There has been a proliferation in studies examining the association between digital media usage in young people and various aspects of well-being, including neurocognitive development in youngsters,1 2 and anxiety and depression in children, teenagers and young adults.3 4 Some research supports negative consequences across a range of outcomes, which also include quality of play, parent–child interactions, academic outcomes, executive functioning, language acquisition and sleep, in addition to compromised privacy and exposure to unsafe content.5–7 Other research points towards notable benefits. For example, a systematic review conducted by Kostyrka-Allchorne et al,7 concluded that exposure to digital educational content during early childhood improved academic skills and predicted positive academic performance in later childhood. A meta-analysis by Madigan et al8 found that while longer duration of screen use was negatively associated with child language, high-quality screen viewing (i.e., educational content, coviewing with caregivers) was positively associated with child language skills. Additional benefits of digital media exposure include increased social contact and support, access to health information, and relationship benefits related to shared digital play.6 9 These studies, often widely covered in the news, receive great scrutiny from the scientific community, where a spirited debate currently resounds.10 11

    One frequent and important criticism surrounds measurement paradigms that fail to capture the complexity of digital media usage, for better or for worse. Indeed, the state of the science requires a move beyond “screen time”, and towards a conceptualisation of digital media as it permeates the various contexts in which children and young people develop. In keeping with systemic formulations of the developmental ecology,12 and expounding on the ideas of “levels of analysis” in developmental psychopathology (e.g., genetic, neurophysiological, individual, family, school, neighbourhood)13 and frameworks for children’s digital safety,14 our scoping review calls for measures that capture the “digital level of analysis” as a unique and distinct layer of organisation in which digital developmental phenomena can be conceptualised, measured, modelled and studied, in order to best understand the influences on, and consequences for, child well-being in the digital age.10 15

    The need to develop and disseminate reliable, valid and comprehensive protocols to measure digital media usage in children, adolescents and families has been clearly articulated.16–18 The development of such tools is rife with challenges, including debate pertaining to the definition of constructs, inconsistencies in targets for measurement (e.g., hours of screen time vs specific types of screen time) and the relative recency of the phenomenon compared with other domains of developmental science (e.g., relationships, parenting, psychopathology). The questions of “what is ‘screen time’ and ‘digital media use’, and how do we measure them?” remain as obvious, yet unanswered, areas for consideration.10 Indeed, studies considering the putative developmental consequences surrounding the amount of screen usage (i.e., screen time as a crude exposure variable) have yielded provocative findings, though interpretation of these studies has also revealed gross limitations in measurement. Content of media, context of usage and co-occurring developmental phenomena and exposures remain important yet unaddressed areas in many studies’ measurement protocols.

    This scoping review synthesises the existing literature on the measurement of digital media usage in children, adolescents and young adults, while clarifying conceptual, definitional and methodological challenges present in research and assessment, particularly in the areas of developmental science, psychology/psychiatry and paediatrics. The current project was initiated in hopes of further detailing the nuances of digital media use, in order to address concerns surrounding the imprecision of currently documented associations between “amount” or “duration” of time spent using screen devices (i.e., “screen time”) and developmental outcomes.19 20 The review was developed, designed, and conducted through a collective effort of over 30 developmental scientists, psychiatrists, paediatricians, psychologists, social workers, caregivers, and other stakeholders, all highly interested in advancing research and practice with children and youth in a digitally mediated world that is constantly evolving. For more information on how this project was initially formulated, please refer to the published protocol.15

    Objectives

    This scoping review aims to (1) identify core concepts in digital media use in children, adolescents, and young adults, (2) map existing research paradigms and measurement tools that operationalise and quantify these key dimensions, and (3) provide integrated findings and suggestions that will be informative to future measurement efforts. Results are intended to inform the development of a “large scale psychometric initiative that seeks to develop a reliable, valid, utilitarian and widely employed suite of instruments that can be deployed by clinicians and scientists to screen, monitor and measure media habits in children and adolescents”.15 Like the review itself, this effort is championed by the Media Impact Screening Toolkit (MIST) workgroup and supported by Children and Screens: Institute of Digital Media and Child Development. To advance the field, it is critical that constructs are consistently defined, and reliable measurement tools are developed.21

    Methods

    Protocol and registration

    The protocol for this scoping review is published in BMJ Open and accessible at the following address: http://dx.doi.org/10.1136/bmjopen-2019-032184.

    Eligibility criteria

    To be eligible for inclusion, the source was required to: (1) be an empirical investigation or present original research, (2) investigate a sample/target population that included children or young persons between 0 and 25 years of age, and (3) include at least one assessment method for measuring digital media use. Reviews, editorials, letters, comments and animal model studies were all excluded. These criteria ensured that the review focused on empirically validated measurement tools that specifically targeted digital media usage in children, adolescents and young adults. To avoid duplication of research findings, we excluded reviews and only included sources conducting original research.

    The search for sources that met these criteria was limited to English language sources published in the 5 years preceding the start of the project (i.e., 1 March 2014 to 2 March 2019; note, there was a delay in completion of this project associated with the COVID-19 pandemic). This criterion was selected based on feasibility (i.e., number of studies), in addition to capturing the historical recency of modern digital media in scientific research. The research team conducting this review spoke English, and limiting the years reduced the number of sources meeting inclusion/exclusion criteria to a viable number for a single scoping review. Originally, this project aimed to include sources published since 2007 (the year the iPhone was released). However, this yielded far too many results, including some that were outdated (e.g., measurements of MySpace usage). Since this review aims to conceptualise the measurement of child, adolescent and young adult digital media use in the present technological landscape, this time restriction maintains contemporary relevance and should not bias or systematically alter the findings.

    Patient and public involvement

    This review did not include the involvement of human research participants (nor patients or the public). However, it was motivated by the observed clinical need for a greater understanding of the current landscape of measurement tools that may be applied in practice settings when working with patients and members of the public. It is anticipated that the results of this review will inform utilitarian, feasible, and widely used frameworks and tools, supporting better and more accurate identification of problematic digital media use in children, adolescents, and young adults. Moreover, the results of this work will reach members of the public through the provision of healthcare that incorporates the findings from this research.

    Information sources

    The search for relevant sources was conducted using the following databases: Ovid MEDLINE, PsycINFO and Scopus. The most recent search was executed on 9 July 2019 for sources published between 1 March 2014 and 1 March 2019. Grey literature was obtained from knowledge experts and organisations relevant to digital media use among children, adolescents, and young adults in the form of reports or original measurement tools. This search strategy for grey literature followed guidelines from the Cochrane Handbook, Centre for Reviews and Dissemination and the Canadian Agency for Drugs and Technologies in Health “Grey Matters”.

    Search

    A detailed search strategy was designed by an expert librarian and information specialist at the University of Waterloo who is a co-author on this manuscript (JS). The comprehensive search strategy consisted of author keywords and subject headings that were combined with Boolean terms “AND” and “OR” and “NOT”. Please refer to online supplemental appendix A for the search strategy used for MEDLINE. Similar search strategies were conducted in PsycINFO and Scopus.
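
    To illustrate the general structure of such a strategy (the full MEDLINE strategy appears in online supplemental appendix A), a hypothetical fragment combining population, exposure and measurement concepts might look as follows. The terms below are illustrative only and are not the actual search strings used:

        (child* OR adolescen* OR youth OR "young adult*")
        AND ("screen time" OR "digital media" OR "social media" OR smartphone* OR "video gam*")
        AND (questionnaire* OR scale* OR measure* OR psychometric*)
        NOT (editorial OR letter OR comment OR review)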

    Selection of sources of evidence

    Database sources

    Once database sources were retrieved and duplicate sources were removed, the remaining sources were uploaded into Covidence, an online systematic review management software. In Covidence, titles and abstracts of database sources were reviewed independently by two trained reviewers and were marked for inclusion, exclusion or requiring further review based on the eligibility criteria. This was phase 1 of the screening process. Discrepancies were resolved by an expert reviewer based on an independent review of the source (inter-rater reliability, IRR=0.81).
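
    The statistic underlying the IRR value is not specified in the text. As a minimal sketch, assuming binary include/exclude decisions and a chance-corrected index such as Cohen’s kappa, agreement between two reviewers could be computed as follows (the data are hypothetical, not drawn from the review):

        from sklearn.metrics import cohen_kappa_score

        # Hypothetical phase 1 decisions from two independent reviewers
        # (1 = include or needs further review, 0 = exclude).
        reviewer_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
        reviewer_b = [1, 0, 1, 1, 1, 0, 1, 0, 0, 1]

        # Chance-corrected agreement between the two sets of screening decisions.
        print(round(cohen_kappa_score(reviewer_a, reviewer_b), 2))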

    Database sources deemed to meet eligibility criteria or requiring further review proceeded to the second screening phase: full-text review. During this stage, sources were again reviewed independently and in duplicate, as in the first screening phase, to ensure inclusion based on the eligibility criteria. Once again, an expert third reviewer resolved conflicts in eligibility evaluation during the second phase of screening based on an independent review. Data extraction was performed on all sources evaluated as meeting all the criteria for inclusion.

    Grey literature sources

    Grey literature sources were collected and stored manually in an online shared-access folder system. Once duplicates were removed, basic information (e.g., source title, authors, retrieval information) was recorded in a Microsoft Excel spreadsheet for tracking purposes. Using separate copies of the spreadsheet, two trained reviewers accessed each grey literature source and independently evaluated the source’s eligibility for inclusion. Evaluations were recorded on each reviewer’s spreadsheet, which were then compared for disagreements. Conflicts were resolved independently by a third trained reviewer using a third copy of the spreadsheet with the discrepancies flagged in advance. Data extraction was then performed on all sources evaluated as meeting all the criteria for inclusion.

    Data charting process

    Data extraction for each source was performed using forms completed online via Qualtrics. Two trained, independent reviewers manually extracted data from each source and input the data into the Qualtrics form. Once data extraction was completed for a source, each reviewer would indicate this in Covidence (database sources) or a shared Microsoft Excel tracking sheet (grey literature sources). Following recommendations for the conduct of scoping reviews, this data charting process was pilot tested on 20 articles to ensure consistency between reviewers and determine overall functionality.22–25 The pilot test yielded acceptable reliability (IRR=0.68); minor modifications were then made to the coding manual to improve construct and response option definitions, at which point IRR increased to 0.81. Once data charting was completed, the data were exported from Qualtrics into Microsoft Excel. The two extractions for each source were then compared and discrepancies were flagged. A third trained reviewer then reviewed these discrepancies, in consultation with the original source, and inputted the final value into a consolidated case for each source. These consolidated cases were then exported to SPSS for data analysis.
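
    As a sketch of the compare-and-flag step, assuming each extraction is exported to a spreadsheet with one row per source and one column per coded field (file names hypothetical; the actual workflow used Qualtrics and Excel), cell-level discrepancies could be isolated with pandas:

        import pandas as pd

        # Two independent extractions, assumed to share the same rows and columns.
        a = pd.read_excel("extraction_reviewer_a.xlsx", index_col="source_id")
        b = pd.read_excel("extraction_reviewer_b.xlsx", index_col="source_id")

        # Cell-by-cell comparison; only disagreeing fields are retained,
        # giving the third reviewer a worklist of discrepancies to resolve.
        discrepancies = a.compare(b)
        discrepancies.to_excel("flagged_discrepancies.xlsx")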

    Data items

    Following recommendations from the Joanna Briggs Institute,24 basic study information was collected for each source, including title, author(s), institution(s), email(s), year of publication and country of origin. Publication type (e.g., article, report, other) was also collected. Level of risk of bias was recorded as counts of the low, high and unclear judgements listed in Covidence (see ‘Critical appraisal of individual sources of evidence’ below).

    For study methodology, the following codes were extracted: setting (lab, clinic, in-home, school, online, etc.), paradigm (naturalistic observation, lab observation, survey, ecological momentary assessment, etc.), and data type (qualitative, quantitative, mixed-methods). Information on the dimensions of digital media use for each source was also collected: digital media type (video games, internet browsing, social media, communication, video streaming, etc.), and devices (laptop/computer, cellphone/smartphone, tablet, television, etc.) were recorded, along with any verbatim definitions of media interaction stated by the researchers.

    Since this scoping review was interested in exploring the nuances of digital media use, style of engagement with digital media usage was measured. This included whether the usage was active or sedentary, online or offline, solitary or shared, educational or non-educational, and productive (media usage tasks that yield new resources or improve skills) or consumptive (media usage tasks that do not yield new resources or improve skills). For sources where these characteristics were not explicitly stated, these variables were marked as “unknown/unclear.” Additionally, the specific applications or websites (e.g., Facebook, YouTube, Instagram) referenced in each source were also recorded.

    Details on the sample characteristics for each source were recorded. This included the sample population’s age group(s) and mean age, sample size, any targeted populations, race (%), ethnicity (%), income level (e.g., socioeconomic status) and the index type used for this calculation. Recruitment methods used to obtain the sample population were recorded, including public advertisement, internal advertisement, direct recruitment of known or unknown participants, and other methods.

    After collecting these variables in relation to the sources/studies, the measurement tools themselves were assessed. Measurement tool name was recorded, in addition to the measurement type (e.g., survey items, structured interview, video or audio observation, automated statistics, experience sampling), any targeting of the tool to a specific population, and informant type (e.g., self-report, mother or father report, joint parent report, unspecified parent report, teacher report, clinician report). Verbatim information on measurement tools’ reliability, validity, strengths and areas for growth were also collected.
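
    Taken together, the data items in this section imply an extraction record along the following lines. This is an illustrative sketch only; the field names are hypothetical and do not reproduce the authors’ actual Qualtrics form:

        from dataclasses import dataclass, field

        @dataclass
        class ExtractionRecord:
            source_id: int
            title: str
            publication_year: int
            setting: list = field(default_factory=list)      # e.g. ["school", "online"]
            paradigm: str = ""                               # e.g. "survey", "ecological momentary assessment"
            media_types: list = field(default_factory=list)  # e.g. ["video games", "social media"]
            devices: list = field(default_factory=list)      # e.g. ["smartphone", "tablet"]
            usage_style: dict = field(default_factory=dict)  # active/sedentary, online/offline, ...
            informant: str = "self-report"                   # or parent, teacher, clinician report
            reliability_rating: str = "unclear"              # good / fair / poor (rubric below)
            validity_rating: str = "unclear"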

    Lastly, each measurement tool was assessed by reviewers in terms of reliability and validity, with judgements of poor, fair or good depending on the researcher(s)’ discussion of psychometric properties and the evidence provided. Reliability was evaluated based on the following metric: good (clear evidence of all forms of reliability, where applicable, and/or numerical data are presented and >0.70), fair (some discussion and evidence of reliability in one domain but not all, and/or reliability statistics are presented but are <0.70) and poor (little to no discussion of the psychometric properties pertaining to reliability). Similarly, validity was evaluated with the following metric: good (clear evidence of all forms of validity, where applicable, and/or numerical data are presented and >0.70), fair (some discussion and evidence of validity in one domain but not all, and/or validity statistics are presented but are <0.70) and poor (little to no discussion of the psychometric properties pertaining to validity).
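
    This rubric amounts to a simple decision rule. The function below restates it as a sketch, collapsing the “where applicable” judgement into a single boolean; it is an approximation for illustration, not the reviewers’ actual coding instrument:

        from typing import Optional

        def rate_psychometrics(has_discussion: bool, evidence_in_all_domains: bool,
                               statistic: Optional[float] = None) -> str:
            # Little to no discussion of psychometric properties.
            if not has_discussion:
                return "poor"
            # Clear evidence across applicable domains, and/or statistics above 0.70.
            if evidence_in_all_domains and (statistic is None or statistic > 0.70):
                return "good"
            # Partial evidence, or reported statistics at or below 0.70.
            return "fair"

        print(rate_psychometrics(True, True, 0.85))  # -> "good"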

    Critical appraisal of individual sources of evidence

    Methodological quality and study bias were assessed prior to data extraction in Covidence. Based on the series of judgements proposed by Cochrane, four areas of risk were assessed in each database source: (1) random sequence generation and allocation concealment (e.g., Does the study avoid selection bias by randomly assigning participants into conditions? Is this assignment concealed from researchers and participants?); (2) blinding of participants and personnel (e.g., Was group membership known to the participant? To the research personnel? Is being blind to condition/group essential to the integrity of the study?); (3) incomplete outcome data (e.g., Are the outcome data for all participants available for review? Are missing data and attrition reported by the researchers? How much data are missing? Why are they missing? How were the data analysed in response to the missing data?) and (4) selective reporting (e.g., Do the researchers only report statistically significant results? Do the researchers only focus on results that support their hypotheses? Do the results differ from the protocol/methodology?).26

    Each area of risk was judged as being low risk, high risk or unclear risk, based on specific definitions for each area as proposed by Cochrane.26 Two reviewers rated level of risk for each source based on these definitions. If a conflict occurred, it was resolved with a blind third review. This process of risk assessment was included in the initial pilot testing of 20 sources and, following modifications, satisfactory IRR was achieved (IRR=0.81). The number of judgements at each risk level was then recorded for each source at the beginning of data extraction. Any sources that were judged as low risk in all four domains were classified as low risk, those that had any number of unclear domains were classified as moderate risk and those with any domains that were categorised as high risk were considered high risk, overall. Sources evaluated as being at a high risk for bias were considered with caution in the data synthesis stage and are flagged in the results (see online supplemental appendix B, table 1; appendix C, table 1).
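
    The overall classification rule in this paragraph can be restated directly in code. The sketch below simply re-expresses that rule; the domain keys are illustrative:

        def overall_risk(judgements: dict) -> str:
            levels = set(judgements.values())  # each value: "low", "high" or "unclear"
            if levels == {"low"}:
                return "low"       # low risk in all four domains
            if "high" in levels:
                return "high"      # any high-risk domain
            return "moderate"      # otherwise: at least one unclear domain

        # Example: one unclear domain and no high-risk domains yields moderate risk.
        print(overall_risk({"sequence_generation": "low", "blinding": "unclear",
                            "incomplete_outcomes": "low", "selective_reporting": "low"}))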

    Synthesis of results

    Once data charting had been completed and discrepancies were resolved, all consolidated cases were exported to SPSS V.26 for data analysis. Due to the nature of our investigation, our data analyses were purely descriptive. All categorical variables were analysed for the frequency of each response; many variables were dichotomous, and others had non-mutually exclusive response options. Several items that had alternative response options were re-coded based on inter-rater agreement when the classification by previous reviewers was inappropriate.
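
    As an illustration of this step, frequencies for a non-mutually exclusive (multi-select) categorical variable could be tabulated as follows. This is a minimal sketch with hypothetical data; the actual analysis was conducted in SPSS:

        import pandas as pd

        # Hypothetical consolidated cases; multi-select responses stored as
        # ";"-separated strings, mirroring non-mutually exclusive options.
        cases = pd.DataFrame({"setting": ["school;online", "school", "in-home;online", "school"]})

        # Percentage of sources endorsing each option
        # (percentages may sum to more than 100 by design).
        print((cases["setting"].str.get_dummies(sep=";").mean() * 100).round(2))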

    For variables with qualitative response options (e.g., Verbatim Definitions of media usage), the responses were thematically analysed and then categorised based on relevant domains. Qualitative and quantitative descriptions are included for these variables within the results section. Sources were assigned a unique “Source #” for identification across multiple tables of information that were created from the data extraction.

    Results: database sources

    Selection of sources of evidence

    The selection of sources is detailed using a flow diagram based on the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines in online supplemental appendix D. The search strategy originally yielded 6459 database sources. After an initial review for duplicates, 4274 sources were uploaded to Covidence, where a further 57 duplicates were removed. The remaining 4217 sources were then screened in Covidence. Phase 1, title and abstract screening, resulted in 4069 database sources being deemed irrelevant and excluded from the study.

    During the second screening phase, full-text review, 22 sources were excluded for the following reasons: the source failed to develop a measurement tool of digital media use (9), the full text was not available in English (8), the tool(s) measured irrelevant factors associated with digital media use (e.g., exposure to violence; 2), the age of participants was not stated (1), the research was preliminary and did not include full data analyses (1), or the source was a duplicate (1). Following this phase, 126 database sources were evaluated as meeting eligibility criteria and moved on to phase 3 for data extraction. From these database sources, 145 measurement tools were identified. Reference information for all final included sources is listed in online supplemental appendix E.

    Characteristics of sources of evidence

    Information on all database sources’ study characteristics is listed in online supplemental appendix B, table 1. Sources are identified with a unique “Source #” to allow for matching of information across tables 1 and 2 (measurement tool characteristics; online supplemental appendix B). Information in these tables is grouped by measurement tool name.

    Study characteristics

    Overall, 145 measurement tools were identified across 126 database sources. All the selected publications are classified as empirical articles. Most studies were conducted in Europe (60%) and Asia (26.21%); the remaining 13.9% were conducted in North America (6.90%), South America (2.76%), Australia (1.38%), Africa (<1%) and intercontinentally (1%). Further, 10.34% of studies were conducted in multiple countries. The countries/regions with the highest number of sourced publications were Spain, China, Germany, Turkey and the UK. The sample included studies that were conducted in numerous settings, including schools (56.55%), online (36.55%), clinics (3.45%), homes (9.66%), communities (<1%) and other environments (e.g., after school programmes, focus groups, gaming halls and hospital-based research centres; 2.76%); a small percentage of studies did not specify the research environment adequately enough to code this domain (6.21%).

    Quantitative data analysis was the predominant measurement type (91%), with the remaining studies (9%) utilising mixed methods. No studies implemented purely qualitative analysis. Paradigms for each study are listed in online supplemental appendix B, table 1.

    Population demographics

    The range of participants’ mean age in the included database sources was 1.61–43 years. Note, the upper bound of this range exceeds the upper bound intended in the scoping review, as some studies included both young people and adults. The age demographics of the database sources were as follows: infancy (birth–23 months; 1.38%), preschool age (2–5 years; 1.38%), school age (6–12 years; 35.86%), adolescence (13–17 years; 77.24%) and young adulthood (18–25 years; 74.48%). Sample size varied considerably across samples (mean=1526, range=7–21 205). The sample size groupings were as follows: under 100 participants (4.83%), 101–499 (25.52%), 500–999 (27.59%), 1000–2499 (28.97%), 2500–4999 (10.34%) and over 5000 (2.76%).

    Interestingly, most studies (75.17%) did not include any information about the racial profiles of their participants. Of the studies that reported this information, East Asian participants (10.34% of studies) were the only racial group reported in over 10% of studies. Race and ethnicity profiles (where reported) for each individual study are included in online supplemental appendix B, table 1. A handful of special populations were also studied across the selected articles, including: people who play video games regularly, Chinese youth, gamers (including internet gamers), treatment-seeking children with internet addiction and/or smartphone overuse, people who play Massively Multiplayer Online Role Playing Games (MMORPGs), parents with ambulatory toddlers, Facebook users, individuals with problematic online gaming and Japanese-speaking individuals. The SES profile of the selected studies was as follows: diverse SES (13.10%), high/middle SES (6.21%), low SES (<1%) and not specified (80%). In studies where SES was assessed, 75% utilised an author-derived scale and 25% used a common index (i.e., an index that has been empirically tested and validated for use in that country/region).

    A variety of recruitment methods were used across studies, including: public advertisements (8.28%), internal advertisements (17.93%), direct recruitment of unknown individuals (58.62%) and direct recruitment of known individuals (6.9%); the remaining studies (20.69%) used an alternative or unknown recruitment method, such as convenience and/or snowball sampling, purposeful sampling, internet-based sampling, simple random sampling, national school surveys from existing databases, online sampling from 25 European countries and sampling by social studies companies/market research panels.

    Critical appraisal within sources of evidence

    Overall, 74.48% of the selected studies were considered to have a low risk of bias, with 11% at moderate risk (where the level of bias was unclear) and 14.48% at high risk. Each source’s level of risk is listed in online supplemental appendix B, table 1, with the sources considered high risk flagged.

    Results of individual sources of evidence

    Information on the measurement tools is listed in online supplemental appendix B, table 2.

    Digital media characteristics

    Digital media type

    A myriad of digital media types were reported in the sampled studies: internet (37.93%), video games (34.48%), television (TV)/video (11.72%), social media (14.48%), communication (11.72%), other (7.7%), e-books (2.07%) and virtual reality (<1%); in 15.17% of studies, the digital media type assessed was unknown or unspecified. About one-fifth (21.38%) of studies directly assessed more than one digital media type. Of those classified as other (5.52%), the following were included: MMORPGs, Digital Video Discs (DVDs), internet and/or computer games, looking at digital photographs, playing with apps based on sound-image associations and playing with puzzles.

    Device type

    Approximately one-third of studies included multiscreen composites with varying devices (34.48%) and/or phones (27.59%); a smaller percentage of studies also assessed the use of laptops or computers (11.72%), gaming consoles (7.59%), TV (6.2%) and tablets (2.76%). Notably, many studies (40.69%) were unclear in this regard or did not fully specify the devices included in their assessments of screen time use.

    Active or sedentary

    Regarding media characteristics: 1.38% of studies included both active and sedentary use, 15.86% were classified as sedentary use (non-physical interaction with the digital media) and 82.76% of studies did not clearly specify whether the media use was active vs sedentary. No studies were classified as solely assessing active media use.

    Online or offline

    Regarding online use, 48.97% of studies assessed online or media use involving the internet, <1% of the studies assessed solely offline media use and 23.45% of studies assessed both online and offline media use. Approximately one-quarter (26.90%) of the included studies did not specify.

    Solitary or shared

    It was also of interest to explore whether individuals used screens alone or in connection with others: 4.83% described solitary and shared screen use either in person or online, 1.38% described solitary and shared use that was online only, <1% described shared use in person only (i.e., coviewing), 3.45% described shared use online only, 2.07% described solitary use only, and, importantly, 87.59% of studies did not specify if media usage was solitary or shared either online or in person.

    Educational content

    Most studies (63.45%) did not report if media use involved educational content (i.e., it is unknown whether these tools measured educational content or not). Of those that did report on this construct (53 studies), 15.1% of studies did assess educational content and 84.91% explicitly stated their measure did not assess educational content.

    Productive or consumptive

    With reference to type of media use, 36.55% of studies included consumptive media use, 6.21% studied both productive and consumptive media use, no studies assessed solely productive use, and 57.24% of studies were unclear in this regard.

    Specific websites and applications

    A small number of studies investigated and/or specified which applications were included in measurements. The following platforms were considered: Facebook (8.97%), Facebook Messenger (2.07%), WhatsApp (4.14%), Twitter (2.76%), Instagram (1.38%), Skype (<1%), Snapchat (<1%), YouTube (1.38%), all of the previously mentioned (6.21%) and other or unknown (28.97%), including online forums, Reddit, internet gaming, Facebook games, ooVoo, Viber, Omegle, Chatroulette, Skout, 6rounds, Tuenti, videogaming, WeChat, QQ, Sina Weibo and other forms of social media.

    Characteristics of measurement tools

    Targeted population

    A handful of tools were targeted towards a specific population (16.55%—listed in online supplemental appendix B, table 2), though most tools were considered universal measurement tools (82.76%), and <1% of studies were unclear in this regard.

    Measure format

    Nearly all the selected tools (97.24%) were validated in the context of basic survey methodology, though some studies also made use of automated statistics, ecological momentary assessment and structured interviews with focus groups, among other methods. The main data collection methodology across studies was self-report (92.41%), followed by passive data collection (3.45%) and unspecified parent report (3.45%). The remaining respondent types included clinician report (1.38%), mother report (1.38%), father report (1.38%), observation (<1%), joint parent report (<1%) and other (1.38%).

    Psychometric properties

    Reliability of sources was mostly satisfactory with the majority of sources being assessed as having good reliability (66.21%), some having fair reliability (15.17%) and a small number having poor reliability (4.83%). Validity was also evaluated as being mostly satisfactory, with the majority of sources having good validity (61.38%), some with fair validity (17.93%), and a few with poor validity (4.14%). A handful of studies were unclear regarding reliability and validity (13.79% and 16.55%, respectively).

    Constructs

    By title, 80% of tools claimed to be assessing abnormal screen usage (such as excessive time spent using a device), with definitions ranging from risk factors to clinical diagnoses for conditions such as internet addiction and compulsive internet use. Further, 13.10% of tools assessed general everyday use of screens and content exposure (i.e., non-pathological use). The smallest pool of tools (6.90%) assessed screen time as a component of overall healthy living and general health behaviours.

    Cross-cultural validation of tools

    About one-in-five tools (22.07%) were studied as cross-cultural validations of the following adaptations: Portuguese, Italian, German, Brazilian, Turkish, Polish, Greek, Vietnamese, Persian, Arabic, Spanish, Korean, Japanese and British.

    Measurement tool strengths and areas for growth

    Notable areas of strength and areas for growth (where applicable) are thoroughly detailed in online supplemental appendix B, table 2. The following section will describe various patterns that emerged across papers. Numerous strengths were identified across certain studies, including novelty in data collection methodology (ecological momentary assessment), assessment modality (phone use) and populations of interest (special populations, both clinical and non-clinical). Further, numerous studies provided a high level of specificity regarding the factor structure of various constructs in this domain (compulsive internet use), while several tools emphasised their alignment with Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) diagnostic criteria for internet gaming and related disorders. Importantly, several studies also demonstrated an effort to establish multiple types of reliability and validity within their sample(s). Lastly, numerous studies also highlighted the brevity of their tools, along with ease of administration and interpretation (related to feasibility).

    There were also notable areas of growth for the development of future measures, or the refinement of existing tools. The largest areas of need were: assessments for young children (especially under 5 years of age, but also 6–13 years of age); the inclusion of educational or other content designed to promote development; tools considering shared usage in person (i.e., coviewing) or online; assessments for entire families; utilisation of data collection methods other than self-report (e.g., observational and passive data collection); validation of clinically oriented tools in clinical samples; expansion of the construct universe (i.e., content and construct validity) beyond duration of screen media exposure; and tools targeted towards under-represented groups, of which there were few beyond the cross-cultural validations.

    Regarding content and construct validity, there was concern surrounding the inclusion of recent technological developments (e.g., social media networks, online gaming and virtual or augmented reality). Furthermore, several domains were inconsistently highlighted as strengths of certain studies/tools and areas of improvement for others, such as: the ability to differentiate between clinical and non-clinical levels of impairment and/or compulsive screen-time use, specificity in symptom identification, assessment of motives for screen use and modalities of screen use, psychometric qualities, the ability to compare between adolescent and parent report and successful cross-cultural validations.

    Synthesis of results

    Narrative conceptualisation of digital media use

    The verbatim definitions of media usage were compiled from all studies. Several themes emerged: 34.40% of studies defined use in terms of frequency, quantity, and duration of use. This typically included defining problematic use as excessive, recurrent, or beyond what an individual intended. Several studies also quantified the number of messages an individual sent, data usage on cell phones and number of hours of video game play. One study also asked participants to report on non-educational or non-professional screen-time only to specifically assess recreational usage.

    Approximately half of the included studies (52.00%) described use with terms that identified clinically significant criteria, including terminology surrounding “addiction” and “dependence”, in addition to reliance on diagnostic criteria. Studies that included descriptions highlighting overuse or problematic use without clinical terminology were not included in this calculation. There was variability across studies in the definition of disorder and acknowledgement of the presence of addictive processes. Some authors characterised problematic digital media usage as a behavioural addiction and others as an impulse control disorder. Further, numerous papers highlighted the similarities between substance use disorder and non-substance (i.e., behavioural) addictions as a clinical profile for problematic technology use in the absence of formalised diagnostic criteria. By emphasising the presence of addiction, numerous papers also highlighted overall distress and/or impairment that was clinically significant. Notably, the following statement by Komnenić et al27 underscores a prevalent challenge in this research:

    Internet addiction is not a homogeneous construct; rather it includes different dysfunctional activities performed online that may or may not manifest themselves simultaneously (e.g., video game playing, cybersex, social networking, online gambling) (p.131–132).

    Interestingly, in their definitions of digital media use, 8.80% of studies identified hypotheses regarding the addictive nature of screens and provided a rationale for potential overuse. These included behavioural theories regarding escapism and the maladaptive tendency to seek out screens to alleviate negative emotions and neurobiological comparisons between addictive behaviours surrounding technology and substance use disorders. Additionally, under this umbrella, Pontes et al28 mentioned several overarching theoretical paradigms, including the cognitive behavioural and social cognitive models.

    Regarding clinical nomenclature, there was substantial variation across studies, which was a limitation consistently acknowledged by researchers. Both generalised and specific labels were used to describe digital media usage with regard to specific platforms and modality of use, including internet gaming disorder (IGD), social networking addiction, internet addiction, mobile phone addiction and Facebook addiction, among others. Several studies also made distinctions between internet addiction as the most severe manifestation of clinically relevant difficulties, and problematic internet use as less severe in terms of the degree of dependency, the nature, presence and number of symptoms and the total time and types of use (relative to normative patterns). A handful of studies also distinctly made the argument that difficulties with digital media use and addiction are reflective of an underlying impulse control disorder, while others categorised difficulties in this domain as a unique cyber or technological addiction. The most common terminology that was used across studies was mention of compulsive/problematic use, IGD and internet addiction.

    Digital media use symptomatology

    A small number of studies (1.60%) explicitly asked participants to self-report their subjective opinions of whether they overused screens to assess for clinically significant problems without objective symptom descriptions, per se.

    The most prevalent theme involved a description of symptoms and consequences associated with digital media usage (mentioned in 57.60% of studies). Notably, this was slightly more prevalent than descriptions of clinical diagnoses or formal identifications of pathology as mentioned above, though most studies that provided symptom profiles also had accompanying labels of clinical impairment.

    A myriad of symptoms were mentioned across papers, including: loss of control, preoccupation with screen time/device use, withdrawal, tolerance, unsuccessful attempts and/or the inability to stop, loss of interest in typical activities, overall impairment to one’s health, relationships, occupational functioning and/or limitations to psychosocial functioning, habitual checking, experiencing an urgency to use and/or check the device, dependency, increased use despite the desire to stop, experiencing irritability and restlessness when unable to use devices for social purposes, depression, anxiety, school withdrawal and reduced quality of life, among others. Numerous studies used the nine DSM-5 criteria specified for IGD; however, studies varied with respect to the use of a formalised set of symptoms.

    Purposes of digital media usage

    With respect to the purposes of digital media use, several prominent domains were identified across studies (though not all studies specifically detailed the domains of use). Specifically, 22.40% of papers highlighted the use of screens for social interaction and relationship building in their definitions. This included defining digital media use for the purposes of instant communication, maintaining and creating new friendships and collaborative video-game play. Further, 28.80% of papers highlighted the use of screens for the purposes of gaming, including both computer and video games, gaming with others and (presumably) gaming individually across online and offline platforms. Lastly, 4.00% of studies emphasised the use of screens for online sexual activities including the use of pornography and online chatrooms, among others. Notably, our search criteria did not specifically target usage for pornography and sexual activities.

    A small percentage of studies (5.60%) reported the possible benefits that can be gleaned from screen time use, including educational, relational and professional advantages. However, these were usually mentioned with the caveat that, despite the advantages that screens allow, overuse can lead to problems and unwanted side effects.

    Issues with conceptualisation and our understanding of digital media usage

    Many studies acknowledged that digital media use is inherently complex, multifaceted and multidimensional, and that their purported instruments were only designed to capture one dimension of an otherwise vast and expansive psychological and behavioural construct. Challenges associated with the ubiquity of devices and the plethora of media activities available were articulated, including the tremendous challenge of neatly isolating these components for analytical purposes. Measure developers have acknowledged that tools have not adequately captured the simultaneous or multipurpose use of screens or devices. For example, gaming can also include socialising (in the case of online games where young people interact with friends), while also including educational content. Similarly, measures were limited in their capacity to capture simultaneous usage for purposes that are either complementary or in opposition. For example, a young person may be using word-processing software for homework, while streaming YouTube videos that are related to the project, and intermittently using multiple platforms on a smartphone (e.g., TikTok, Snapchat, Facebook Messenger) to connect with peers who are involved in the group project, and others who are not. Furthermore, this youth may have problematic internet usage, commensurate with patterns of withdrawal or other criteria outlined by diagnostic criteria, while another youth who is presently engaged with the same devices may not present with any impairment. Lastly, the two hypothetical youth may live in homes with vastly different norms and rules around digital media usage, further contextualising the nature of their difficulties. Such complexities punctuate the obvious need to move beyond screen time as a meaningful metric, and towards multipurpose measurements that consider digital media usage across layers of analysis.

    Results: grey literature sources

    Selection of sources of evidence

    The primary source collection yielded 28 grey literature sources from knowledge experts and handsearching of organisations within the domain of digital media and child development. Sources were screened for duplicates and three were removed. Due to the nature of the grey literature, title and abstract screening was omitted, and full-text review was completed exclusively. After review, 11 sources failed to meet the inclusion criteria and were removed from the study. Reasons for exclusion included: the source was published outside of the inclusion dates (7), the tool(s) measured factors outside the scope of the present review (e.g., news exposure; 3), or the source failed to develop a measurement tool of digital media use (1). Following exclusions, 14 grey literature sources were evaluated as meeting our inclusion criteria and were included in the study. From these, 17 measurement tools were identified. Reference information for all final included sources is listed in online supplemental appendix E.

    Characteristics of sources of evidence

    Grey literature sources’ information is listed in online supplemental appendix C, table 1, with measurement tool information listed in online supplemental appendix C, table 2. Again, “Source #” is matched across tables.

    Study characteristics

    All the selected grey literature publications were agency or institutional reports with attached questionnaires, with the exception of one source that was solely a questionnaire. Therefore, 13 independent studies were identified across 14 grey literature sources. The majority of sources collected data in the USA (78.57%), were conducted online (71.43%), used quantitative data analysis (78.57%) and employed national survey methodology (92.86%). Study characteristics are listed in online supplemental appendix C, table 1.

    Population demographics

    Sample size ranged from 743 to 4594 participants, with a mean sample size of 1630; one source did not report sample size. No grey literature source reported participants’ mean age. However, the dominant age demographic assessed was adolescence (71.43%). The majority of reports did not describe the race or ethnicity of participants (67.86%). Of those that did (32.14%), similar racial distributions were reported (i.e., predominantly White, followed by Hispanic, then Black). Half (50%) of the sources reported on samples diverse in socioeconomic status, with the majority of SES assessments constructed by the authors (64.29%). All sources that reported recruitment methods used direct recruitment of unknown participants (85.71%); the remaining sources did not mention recruitment methodology.

    Critical appraisal within sources of evidence

    Almost all the included grey literature sources were assessed as having low risk of bias (92.86%), with the remaining source determined to be of moderate risk due to a lack of information (the source was solely a questionnaire).

    Results of individual sources of evidence

    Information on the measurement tools identified in the grey literature sources is listed in online supplemental appendix C, table 2. None of the grey literature sources explicitly discussed the strengths and limitations of their measurement tools.

    Digital media characteristics

    Social media usage was the most assessed digital media type (92.86%). Other common types of digital media (e.g., video games, communication, TV/video streaming and internet use) were all assessed in the majority of sources (71.43%–78.57%). Online supplemental appendix C, table 2 lists all digital media types measured in each source. Unlike the database sources, the grey literature measured aspects of digital media use related to apps, art creation and work/schoolwork. Cellphone/smartphone was the most assessed device (92.86%), followed by laptops (64.29%), tablets (57.14%) and gaming consoles (57.14%). The grey literature sources also assessed smart toys (21.43%), which were not measured in the database sources.

    Regarding usage characteristics, the following were investigated: active and sedentary use (7.14%), online use (100%), offline use (85.71%), solitary and shared use (7.14%), educational content (50%) and productive and consumptive use (71.43%). Specific website and application usage went unreported in half of the sources (50%). Assessments of Snapchat and Instagram use were the most prevalent (42.86% each). The grey literature also investigated distinct streaming services (as opposed to a collapsed category) and specific kids’ gaming sites. These areas and applications were not assessed in the database sources.

    Characteristics of measurement tools

    All the grey literature measurement tools were universal and validated in the context of basic survey methodology (100%). For respondents, self-report was most prominent, appearing in 11 sources (78.57%): seven of these used self-report alone and four (28.57%) also included parent-reporting in some form. The remaining three sources (21.43%) collected responses from parents only. Psychometric properties of the measurement tools were not discussed in any of the grey literature sources.

    Discussion

    Summary of evidence

    The purpose of this scoping review was to evaluate extant measures of digital media use and related constructs in children and adolescents, while highlighting important areas for growth and advancement in the domain of digital media measurement in developmental science. Two key findings emerged. First, many measures exist that are mostly individual or caregiver report, particularly for adolescents and young adults, with a focus on problematic digital media overuse. Second, our findings speak to the need for an integrative suite of high-quality instruments that are widely used across research laboratories and methodological settings, specifically in regard to tools that are multilevel (consider digital media use across the developmental ecology), multi-method (include self-report and other forms of data capture), and multi-informant (assess the perspectives of multiple persons, including the discrepancy between child and adult perspectives as being clinically informative).

    There have been numerous calls for advancement in the measurement domain for developmental media research.16–18 Findings from the present scoping review have clearly delineated the nature and extent of this problem. Researchers should be applauded for advancing the field to its present form, largely through the employment of caregiver and self-report measures of “amount” of digital media use or problematic use, and in the context of advanced inferential statistical models—the kinds frequently used in public health, epidemiology, psychology and other areas of the medical and social sciences. Similar advances have been observed in developmental science, particularly with the usage of clever observational and laboratory paradigms.29 30 That being said, the field appears to be approaching an impasse. It is unlikely that replicable discoveries will emerge from an area where there is so little consensus around appropriate measurement methodology, including fundamentals of psychometric theory such as content and construct validity. Thus, the 30 authors of this review, along with all members of the workgroup, call for the development of a widely employed set of instruments that can be used across multiple laboratories, including those with disparate views around the risks and benefits of digital media usage.

    Large-scale and centrally funded consensus exercises in construct validity and psychometric measurement have been employed elsewhere in developmental science and psychiatry. The result of these frameworks has been a high-level and constructive debate that supersedes the methodology of any single study (or investigator), and instead integrates studies and (non-)replication into a meaningful and coherent scientific dialogue. For example, the Research Domain Criteria championed by the National Institute of Mental Health have advanced the fields of psychiatry and neuropsychology beyond the DSM framework. Relatedly, and perhaps more specific to the present review of measures, the National Institutes of Health (NIH) demonstrated outstanding leadership in funding and developing a series of state-of-the-art psychometric tools in the NIH Toolbox and related suites of instruments. The comprehensive development and maintenance of these instruments has been championed by healthmeasures.net via NIH funding mechanisms. Given the success of these instruments, the members of the MIST Working Group call for a similar exercise in the domain of digital media use, particularly in childhood and adolescence, but also across the life course. To support this initiative, the strengths and limitations of the present measures are described below.

    Strengths and limitations of measures

    The most obvious area of strength for the existing measures is face validity. This likely stems from the major concerns among professionals, parents, and the public with regard to the amount of media being consumed or used by young people. Accordingly, investigators have demonstrated considerable zeal in tackling issues pertaining to the frequency and duration of media use in general, in addition to pathological behavioural repertoires that putatively emerge in the context of such usage patterns. Moreover, these self-report and caregiver-report instruments have proven highly feasible. The use of traditional survey responses (including Likert scales) in the context of study protocols has allowed the field to advance in terms of the number of researchers and studies employing these methods. That said, there is often a trade-off between measurement feasibility and quality. Thus, the reviewed instruments perform more poorly in terms of content and construct validity.

    Excepting the examination of online versus offline use, which is a more recent undertaking, many tools do not explore critical domains such as active versus sedentary use, shared versus solitary use (e.g., co-viewing, social video game play), and productive versus consumptive use. Indeed, the measures used in many studies (including some of the authors’ own) would not satisfactorily disambiguate 1 hour spent playing a first-person shooter game from 1 hour of computer programming for leisure or of homework on a computer (see the illustrative sketch below). There are also distinctions that may fall along disciplinary lines and biases (e.g., paediatricians, clinical psychologists and psychiatrists, whose research is informed by real-life clinical encounters with problematic overuse, compared with educational psychologists and researchers of pedagogy, who are interested in media for learning). Of great relevance to the reductionist dispute over whether digital media is harmful or helpful, educational content and other development-enhancing content is largely omitted from the measures included in the present scoping review.
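    To make this point concrete, the following minimal Python sketch illustrates the kind of episode-level record that could, in principle, carry these distinctions; the field names and category labels are illustrative assumptions, not a coding scheme drawn from the reviewed measures.

```python
from dataclasses import dataclass

# Hypothetical episode-level record; fields and categories are
# illustrative assumptions, not drawn from the reviewed measures.
@dataclass
class UsageEpisode:
    minutes: int
    device: str   # e.g., "computer", "smartphone", "console"
    content: str  # e.g., "first-person shooter", "programming", "homework"
    purpose: str  # e.g., "consumptive", "productive", "educational"
    social: str   # e.g., "solitary", "shared" (co-viewing, co-play)

episodes = [
    UsageEpisode(60, "computer", "first-person shooter", "consumptive", "shared"),
    UsageEpisode(60, "computer", "programming for leisure", "productive", "solitary"),
    UsageEpisode(60, "computer", "homework", "educational", "solitary"),
]

# A pure "screen time" total scores all three episodes identically,
# discarding exactly the distinctions discussed above.
print(sum(e.minutes for e in episodes))  # 180 minutes; dimensions lost
```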

    Another construct validity issue arising from the current study lies in the realm of behavioural addictions. There have been several recent commentaries urging better consideration of digital media and internet overuse, including a recent proposal to distinguish a “primarily mobile” from a “non-mobile” internet addiction.31 32 While not the focus of the present study, most measurement tools explored clinical diagnoses (e.g., internet addiction) or risk factors based on the symptomatology required for disordered use.33–35 There appeared to be a spectrum of labelling from less severe (internet misuse, excessive internet use) to clinically significant and more severe behavioural addictions (i.e., internet addiction, IGD); however, the usage and interpretation of diagnostic criteria varied considerably throughout the literature, and cut-offs were diverse and debated. Additionally, certain assessment items were open to individual interpretation; for example, it was common for sources to define addiction in terms of digital media usage exceeding the individual’s intended use. As has long been the case in developmental psychology and developmental psychopathology, there is an ongoing need to differentiate typical (or normal) behavioural and phenotypic variation from atypical (or abnormal) presentations and impairment. The utilisation of instruments that are sensitive to variation both within and between diagnostic categories will be essential.
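    As a simple, hypothetical illustration of why diverse cut-offs matter, the sketch below shows how the apparent prevalence of disordered use in the same simulated sample shifts markedly with the symptom threshold chosen; the nine-criterion structure loosely echoes IGD-style checklists, but every number here is an assumption.

```python
import numpy as np

# Simulated symptom counts on a hypothetical nine-item checklist
# (loosely echoing IGD-style criteria); all parameters are assumptions.
rng = np.random.default_rng(seed=7)
symptom_counts = rng.binomial(n=9, p=0.25, size=5000)

# The same sample yields very different "prevalence" under different
# cut-offs, mirroring the definitional diversity noted in the text.
for cutoff in (3, 4, 5, 6):
    prevalence = np.mean(symptom_counts >= cutoff) * 100
    print(f">= {cutoff} symptoms: {prevalence:.1f}% classified as disordered")
```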

    Regarding the measurement tools used to assess digital media usage, the majority were quantitative and universal.33 36–39 As mentioned above, these tools predominantly targeted frequency-based aspects of usage.40–42 Despite social connection and entertainment being the prevailing uses of digital media, there was a paucity of tools specifically developed and validated to assess social media usage, communication, e-books and (perhaps less surprisingly) virtual reality.43–45 Given the increasing popularity of these digital media activities, the assessment and investigation of these forms of usage must be more strongly developed. Furthermore, while numerous measurement tools were cross-culturally and linguistically validated, the relative dearth of demographic considerations surrounding race, ethnicity, socioeconomic status, and gender in the literature also prompts some concern.38 46 47 Given the replicated finding that children and youth far exceed the guidelines for daily digital media usage,48 49 psychometric work may also benefit from the development of norms surrounding regular and problematic usage (sketched below). Additionally, a lack of specificity regarding device type could complicate measurement and conceptualisation if not sufficiently understood and considered.
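    A normative approach might look something like the following minimal sketch, in which a child’s self-reported daily minutes are converted to a percentile rank within an age-banded reference sample; the age band, the distribution and any flagging rule (e.g., above the 95th percentile) are purely illustrative assumptions rather than validated norms.

```python
import numpy as np

def usage_percentile(minutes: float, reference_sample: np.ndarray) -> float:
    """Percentile rank of one child's daily usage within an age-banded
    reference sample (per cent of the sample at or below this value)."""
    return float(np.mean(reference_sample <= minutes) * 100)

# Simulated reference data for one hypothetical age band (8-10 years);
# real norms would come from representative, validated samples.
rng = np.random.default_rng(seed=1)
band_8_to_10 = rng.gamma(shape=2.0, scale=90.0, size=1000)  # minutes/day

print(f"{usage_percentile(240, band_8_to_10):.0f}th percentile for this band")
# A normative screening flag could then be defined relative to peers
# (e.g., above the 95th percentile) rather than by a fixed hour count.
```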

    The widespread utilisation of self-report surveys was not surprising. While this method is accessible, cost-effective and simple, it opens assessments to many well-known biases, such as social desirability, recall bias and other validity concerns (e.g., people simply being unaware of how much media they or their children use, or reports of the amount of screen time being systematically linked to other criterion variables). Standardised self-report procedures and norms may help offset this problem. However, it is likely that the greatest advances will involve developments in data capture, including automated data collection from devices or other software solutions such as computer vision, ecological momentary assessment, wearables, or a hybrid of these technologies (see the sketch below). Very few studies utilised automatically collected usage statistics,43 50–52 though there is a slow and steady uptake in the development of these assessment tools.29 30 Challenges to their widespread adoption include data storage and privacy concerns, issues not faced in the same manner by big technology companies. Increased employment of this methodology could increase reliability. One study used ecological momentary assessment to evaluate digital media usage.53 However, further advancements in this domain are warranted, particularly the development of convenient tools that are less cumbersome to the user.
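    As one concrete illustration of such hybrid capture, the minimal sketch below schedules signal-contingent ecological momentary assessment prompts at random times within an assumed waking window; in a real protocol, each prompt would be paired with, or validated against, device-logged usage. All parameters here are hypothetical.

```python
import random
from datetime import datetime, time, timedelta

def schedule_ema_prompts(day: datetime, n_prompts: int = 5,
                         start: time = time(9, 0),
                         end: time = time(21, 0)) -> list[datetime]:
    """Draw n_prompts random timestamps within the day's sampling window."""
    window_start = datetime.combine(day.date(), start)
    window_len = (datetime.combine(day.date(), end) - window_start).seconds
    offsets = sorted(random.sample(range(window_len), n_prompts))
    return [window_start + timedelta(seconds=s) for s in offsets]

# Each prompt would ask, in the moment, about current device, app and
# context, sidestepping retrospective recall bias.
for prompt in schedule_ema_prompts(datetime.now()):
    print(prompt.strftime("%H:%M"), "- 'Are you using a screen right now? Which app?'")
```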

    Limitations

    Some strengths of the present study were: (1) a novel approach, focusing on source methodology for data extraction with a specific emphasis on tools for measuring digital media use; (2) the inclusion of sources that were predominantly at low risk of bias; (3) the inclusion of measurement tools that were largely reliable and valid; (4) the use of a robust coding system in the study review and data extraction stages; and (5) the importance of the objective, namely, scoping the literature around measurement of digital media usage. This scoping review also had some limitations. First, owing to the constantly evolving nature of digital media, sources published before March 2014 were excluded. While this exclusion is thought to have minimal impact on the scoping review, since the focus was on a modern conceptualisation of digital media usage, researchers interested in earlier digital media use may need to consult additional resources. Second, data extraction and coding were inevitably delayed by the COVID-19 pandemic. Third, a large portion of the included studies employed potentially biased recruitment techniques. Lastly, this scoping review is necessarily limited by the available literature. Given the rapidly evolving technological landscape, there will be an ongoing need for scientists and clinicians to stay abreast of measurement development, especially as technology changes. Thus, it is recommended that a similar scoping review exercise be conducted every few years for the foreseeable future.

    Conclusions

    Despite burgeoning programmes of research in laboratories across the world, the concept of digital media use in young people still warrants further explication and clarification. Many meritorious assessment tools have been created to assess constructs pertaining to digital media overuse, though important areas remain overlooked, oversimplified or understudied. Future research would clearly benefit from moving beyond “screen time”, allowing exploration of the different types of usage across devices, platforms and contexts, for better or for worse. Integrating theoretical frameworks from elsewhere in developmental science is essential, including moving beyond screen time as the relevant variable and towards considering how children grow up in a multilevel ecology that includes a digital level of analysis, among others. The modern technological landscape is rife with challenges surrounding measurement, which are only compounded by challenges in developmental science generally. At the same time, measurement solutions developed in this domain will likely propagate across the medical, psychological and social sciences. It is the hope of the authors that this scoping review represents an interim “taking stock” of a relatively young discipline that has already accomplished much, while being mindful of the significant work ahead. More specifically, these findings may help inform further research and the creation of a consensus-based, psychometrically robust digital media toolkit that is simultaneously comprehensive and feasible for researchers and clinicians alike.

    Data availability statement

    Data are available on reasonable request. All data relevant to the study are included in the article or uploaded as online supplemental information. Extended data are available by request.

    Ethics statements

    Ethics approval

    This project did not involve living human or animal participants, or human or animal biological materials, and therefore did not require ethics review and approval by a research ethics board.

    Acknowledgments

    We would like to acknowledge the efforts of all the Research Assistants in the Whole Family Lab at the University of Waterloo (Waterloo, Canada) for their assistance in data collection. We would also like to thank Children & Screens: Institute of Digital Media and Child Development for supporting this project and the Media Impact Screening Toolkit (MIST) Workgroup.

    Footnotes

    • SSM and LC are joint senior authors.

    • Collaborators The MIST Working Group: Daphne Bavelier, University of Geneva, Geneva, Switzerland; Florence Breslin, Laureate Institute for Brain Research, Tulsa, USA; Joanne Broder, Saint Joseph’s University, Philadelphia, USA; Zsolt Demetrovics, Eötvös Loránd University, Budapest, Hungary; John Hutton, Cincinnati Children’s Hospital, Cincinnati, USA; Jessica Mendoza, University of Alabama, Tuscaloosa, USA; Jaysree Roberts, NYC Health + Hospitals/Kings County, New York, USA; Thomas Robinson, Stanford University, Stanford, USA; Cris Rowan, Zone'in Programs Inc., Sechelt, Canada; Oren Shefet, SUNY Old Westbury, Old Westbury, New York, USA; Tim Smith, Birkbeck, University of London, London, UK; Rachel Waxman, NYC Health + Hospitals/Kings County, New York, USA; Paul Weigle, AACAP/Natchaug Hospital, Mansfield, USA.

    • Contributors DTB obtained funding, conceptualised the research, oversaw data collection and analyses, and edited the manuscript. SSM conceptualised the research, conducted data collection and analyses, and drafted and edited the manuscript. LC conducted data collection and analyses, and drafted and edited the manuscript. PH-DP obtained funding, conceptualised the research, and edited the manuscript. DC, TA, LH, KD-H, JAE, AGF, SMad, GP, H-JR, DT, SU, JS, RN and HP conceptualised the research and edited the manuscript. The MIST Working Group conceptualised the research and edited the manuscript. All authors were involved in the decision to submit the manuscript for publication and approved the final manuscript.

    • Funding Funding for this research project was provided by Children and Screens: Institute of Digital Media and Child Development (grant/award number: not applicable). The project was conceptualised, stewarded, and funded by Children and Screens as part of an effort to develop a Media Impact Screening Toolkit for clinicians and researchers; as such, Dr PH-DP (fourth author) represents the organisation as an active member of the article authorship team.

    • Competing interests None declared.

    • Provenance and peer review Not commissioned; externally peer reviewed.

    • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.