This description was provided by Professor Lloyd Johnston of the University of Michigan.

The data presented here for the United States come from a long-term series of annual national surveys that are part of the ‘Monitoring the future’ project (Lloyd D. Johnston, principal investigator; Jerald G. Bachman, Patrick M. O’Malley, John E. Schulenberg and Richard A. Miech, co-investigators). This research series, in its 40th year in 2015, is funded under a series of investigator-initiated competing research grants from the US National Institute on Drug Abuse and is conducted at the Institute for Social Research of the University of Michigan. The findings and the description presented here were provided by Professor Johnston.

Surveys on nationally representative samples of 12th graders have been carried out each year since 1975. Beginning in 1991, surveys on nationally representative samples of 8th- and 10th-grade students have also been conducted annually. In all, some 1 500 000 students have been surveyed over the life of the study. Follow-up surveys of each 12th-grade class have been conducted since 1977, yielding annual national samples of college students and adults, eventually reaching secondary-school graduates up to the age of 55. In the United States, about 85-90 % of each birth cohort graduates from secondary school by completing 12th grade. Considerably more complete 10th grade, and about 97 % of the teenagers born in 1999 and in 2000 were enrolled in school at the time of the data collection.

Population

In the United States, school attendance is required up to the age of 16. For this report, only the data for students who were in 10th grade in the spring of 2015 are presented. Nearly all of the students in this grade are 15 or 16 years of age, thus approximating the age of the ESPAD participants. Of the 10th graders, 42 % were 15 years old, 53 % were 16 years old and 5 % were 17 years old.

Sample and representativeness

In 2015, the 10th graders included in the study comprised 15 015 students in 120 high schools nationwide (102 public and 18 private schools), selected to provide an accurate representative cross-section of all 10th-grade students in the coterminous United States (48 states, i.e. all except Alaska and Hawaii).

A multistage random sampling procedure is used to secure the nationwide sample of 10th-grade students each year. Stage 1 is the selection of particular geographic areas across the country; Stage 2 is the selection, with probability proportionate to size, of one or more schools containing a 10th grade in each area; and Stage 3 is the selection of students within each school. Within each school, up to 350 10th graders may be included. In schools with a small number of 10th graders, the usual procedure is to include all of them in the data collection. In larger schools, a subset of 10th graders is selected either by randomly sampling entire classrooms or by some other random method judged to be unbiased. The resulting data are reweighted to correct for any differences in selection probability that may have occurred in the sampling. (See Johnston et al., 2016 and Miech et al., 2015 for details on sampling and field procedures, as well as for more detailed results.)
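The probability-proportionate-to-size (PPS) logic used at Stage 2 can be sketched as follows. This is a generic illustration written in Python, not the project’s own sampling code; the school list, size measures and number of selections are invented for the example.

import random

# Hypothetical frame of schools in one geographic area: (school id, number of 10th graders).
# Real sampling frames, size measures and strata are more elaborate; this only shows the
# probability-proportionate-to-size idea: larger schools are more likely to be drawn.
schools = [("A", 90), ("B", 410), ("C", 260), ("D", 130), ("E", 575), ("F", 305)]
n_draws = 2  # number of schools to select in this area (assumed)

def pps_systematic(frame, n):
    """Systematic PPS selection along the cumulated size measure."""
    total = sum(size for _, size in frame)
    interval = total / n
    start = random.uniform(0, interval)
    points = [start + i * interval for i in range(n)]
    chosen, cumulative = [], 0
    points_iter = iter(points)
    point = next(points_iter)
    for school_id, size in frame:
        cumulative += size
        # A school is selected once for every selection point falling in its size range.
        while point is not None and point <= cumulative:
            chosen.append(school_id)
            point = next(points_iter, None)
    return chosen

print(pps_systematic(schools, n_draws))  # e.g. ['B', 'E']

Because a large school is more likely to be selected at this stage but contributes a smaller share of its students at Stage 3, the stages roughly offset each other; the reweighting mentioned above corrects for any remaining differences in selection probability.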

Field procedures

Parental notification, with the opportunity for parents to decline their child’s participation, is required prior to the administration of the survey; some individual schools require active written parental consent. Approximately 3 weeks before the administration, letters and brochures are sent to the students’ parents to inform them of the study and to request permission for their children to participate.

About 10 days before the administration, the students are given flyers explaining the study, telling them that their participation is voluntary and that the project has a special government grant of confidentiality that allows the investigators to protect all information gathered in the study. The questionnaire administration itself is conducted by local representatives of the Institute for Social Research and their assistants, following standardised procedures detailed in a project instruction manual. The questionnaires are administered in classrooms during a normal class period whenever possible; however, circumstances in some schools require the use of larger group administrations. Teachers introduce the interviewer and remain in the room to ensure an orderly atmosphere. They are asked not to move around the room, lest students be concerned that their answers might be seen. Most respondents can finish within a normal 45-minute class period; for those who cannot, an effort is made to provide a few minutes of additional time. The data-collection period ran from mid-February to mid-June 2015. The annual surveys are always conducted at the same time of year to avoid any unintended artefacts. The questionnaires turned in by the 10th-grade respondents to the university-employed interviewer are anonymous: they contain no names, addresses, phone numbers or other individually identifying information.

Questionnaire and data processing

The ‘Monitoring the future’ questionnaires are designed to be optically scanned after they have been completed. All questions have a pre-specified set of answer options, with no write-in answers. A great many of the questions in the ‘Monitoring the future’ questionnaires are equivalent to questions in the core segment of the ESPAD survey, but a number of the ESPAD questions are not included in ‘Monitoring the future’. Similarly, many of the ‘Monitoring the future’ questions are not included in ESPAD.

Because many questions are needed to cover all of the topic areas in the study, much of the questionnaire content intended for 10th graders is divided into four different questionnaire forms that are distributed to participants in an ordered sequence that ensures four virtually identical random subsamples. About one third of each questionnaire form consists of key variables that are common to all forms. All demographic variables, and nearly all of the drug use variables included in this report, are contained in this common set of measures. Questions on other topics tend to be contained in fewer forms, and are thus usually based on one third or two thirds as many cases (i.e. approximately 5 000 to 10 000 cases).
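The ordered handout can be illustrated with a minimal sketch; the roster and the cycling rule below are hypothetical, and in practice the forms are distributed by the interviewers in classroom order. Because a student’s position in the handout order is unrelated to drug use, each form reaches what is effectively a random subsample.

from collections import Counter

# Hypothetical classroom roster; the four forms are handed out in a repeating 1-2-3-4 cycle.
students = [f"student_{i:02d}" for i in range(1, 21)]
FORMS = [1, 2, 3, 4]

assignment = {student: FORMS[i % len(FORMS)] for i, student in enumerate(students)}

print(Counter(assignment.values()))  # Counter({1: 5, 2: 5, 3: 5, 4: 5})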

After the administration of the surveys in the classrooms, the interviewers forward boxes of the completed questionnaires to a contractor, where they are optically scanned. The data are then sent to study staff, who check them for accuracy, process them and clean them using SAS statistical and data-management software. Processing and cleaning steps include consistency and wild-code checking, assignment of missing-data codes, addition of weightings and school information, creation of permanent recoded variables and creation of a clean data file for analysis. Approximately 5 % of the questionnaires were discarded in the cleaning process.
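The kinds of checks described above can be illustrated with a small sketch. It is written in Python for readability and is not the project’s SAS code; the variable names, valid range and missing-data code are assumptions made for the example.

import pandas as pd

# Hypothetical raw answers; the actual processing uses the study's own variable
# names, valid ranges and missing-data codes.
raw = pd.DataFrame({
    "resp_id":      [1, 2, 3, 4],
    "alc_lifetime": [0, 1, 9, 4],  # answer categories assumed to run from 0 to 6
    "alc_30day":    [0, 2, 1, 3],
})

MISSING = -9              # assumed missing-data code
VALID = set(range(0, 7))  # assumed valid range for the wild-code check

clean = raw.copy()
# Wild-code check: recode out-of-range answers to the missing-data code.
clean.loc[~clean["alc_lifetime"].isin(VALID), "alc_lifetime"] = MISSING
# Consistency check: last-30-day use cannot logically exceed lifetime use.
inconsistent = (clean["alc_30day"] > clean["alc_lifetime"]) & (clean["alc_lifetime"] != MISSING)
clean.loc[inconsistent, ["alc_lifetime", "alc_30day"]] = MISSING
print(clean)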

Weightings are added to the data to improve the accuracy of the estimates by correcting for unequal probabilities of selection that can arise at any point in the multistage sampling procedure.
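In general terms, a student’s base weight is the inverse of the product of the selection probabilities at the three sampling stages. The sketch below uses invented probabilities purely to show the arithmetic; the study’s actual weighting also reflects its stratification and field outcomes.

# Hypothetical selection probabilities for a single student (all three values are assumed).
p_area    = 0.05        # Stage 1: probability that the geographic area was selected
p_school  = 0.30        # Stage 2: PPS probability that the school was selected within the area
p_student = 350 / 900   # Stage 3: 350 of 900 10th graders subsampled within a large school

selection_probability = p_area * p_school * p_student
weight = 1.0 / selection_probability  # inverse-probability (base) weight
print(round(weight, 1))               # about 171.4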

School and student cooperation

Schools are invited to participate in the study for a 2-year period. With very few exceptions, each school from the original sample that participated in the first year has agreed to participate in the second. For each school refusal, a similar school (in terms of size, geographic area, community size, etc.) is recruited as a replacement. In 2015, 44 % of the sampling ‘slots’ were filled with originally selected schools and 49 % with replacement schools; overall, some 93 % of the sampling slots were filled, including the replacement schools.

In 2015, completed questionnaires were obtained from 87 % of all sampled students in the 10th-grade sample of schools. The single most important reason that students were missed was absence from class at the time of data collection; explicit refusals amounted to less than 1 % of students. Student comprehension is judged to be very high, based on pilot tests, questionnaire-completion rates and low rates of internal inconsistencies.

Reliability and validity

Even taking into account the clustered nature of these school-based samples, the annual drug-prevalence estimates, based on the total sample of 10th graders each year, have confidence intervals that average about ± 1 %. Confidence intervals on lifetime prevalence for 10th graders vary from ± 0.2 % to ± 2.4 %, depending on the drug; confidence intervals for last-12-month, last-30-day and daily use are smaller. This means that, had it been possible to invite all schools and all 10th-grade students in the 48 coterminous states to participate, the results from such a massive survey should be within about 1 percentage point of the present findings for most drugs at least 95 times out of 100. This is considered a high level of sampling accuracy, permitting the detection of fairly small changes from one year to the next.
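As a rough indication of how interval widths of this order arise in a clustered sample, the following calculation uses an assumed prevalence and an assumed design effect; neither figure is taken from the study.

import math

# Illustrative arithmetic only: a lifetime-prevalence estimate of 30 % among about
# 15 000 10th graders, with an assumed design effect of 2.0 to reflect the clustering
# of students within schools.
p, n, deff = 0.30, 15015, 2.0
se = math.sqrt(deff * p * (1 - p) / n)
half_width = 1.96 * se
print(f"95 % CI: {p:.1%} ± {half_width:.2%}")  # about ± 1 percentage point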

The question always arises of whether sensitive behaviours like drug use are honestly reported. As in most studies dealing with sensitive behaviours, there is no direct, totally objective validation of the present measures; however, the considerable amount of inferential evidence that exists from the study of 12th graders strongly suggests that the self-report questions produce largely valid data (Johnston and O’Malley, 1985; Johnston, O’Malley, Bachman and Schulenberg, 2003; O’Malley, Bachman and Johnston, 1983; these citations are available at http://www.monitoringthefuture.org).

First, using a three-wave panel design, it was established that the various measures of self-reported drug use have a high degree of reliability, a necessary condition of validity. In essence, this means that respondents were highly consistent in their self-reported behaviours over a 3-4-year interval. Second, a high degree of consistency was found among logically related measures of use within the same questionnaire administration — evidence for convergent validity. Third, the proportion of seniors (i.e. 12th graders) reporting some illicit drug use by 12th grade has reached two thirds of all 12th-grade respondents in peak years and as high as 80 % in some follow-up years, which constitutes prima facie evidence that the extent of under-reporting must be very limited. Fourth, the seniors’ reports of use by their unnamed friends, about whom they would presumably have less reason to distort, have been highly consistent with self-reported use in the aggregate in terms of both prevalence and trends in prevalence. Fifth, it was found that self-reported drug use relates in consistent and expected ways to a number of other attitudes, behaviours, beliefs and social situations; in other words, there is strong evidence of construct validity. Sixth, the missing-data rates for the self-reported use questions are only very slightly higher than for the preceding non-sensitive questions, in spite of the explicit instruction to respondents to leave blank those drug use questions they felt they could not answer honestly. And seventh, the great majority of respondents, when asked, say they would answer such questions honestly if they were users.

This is not to argue that self-reported measures of drug use are valid in all cases. The researchers tried to create a situation and set of procedures in which students feel that their confidentiality will be protected. They also tried to present a convincing case as to why such research is needed. The evidence suggests that a high level of validity has been obtained. Nevertheless, insofar as there exists any remaining reporting bias, the estimates are believed to be in the direction of under-reporting. Thus, the estimates are believed to be lower than their true values, even for the obtained samples, but not substantially so.

Methodological considerations

There is no reason to believe that the sample is biased. However, it should be noted that the population consists of students in grade 10. Most of them are 15-16 years old, which means that a large majority were born in 1998 or 1999; because not all of them were, there is a very modest degree of non-comparability with the regular ESPAD countries.

Another difference, compared with most but not all other countries, was that the students in the United States knew about the study in advance. It seems reasonable to think that this has not created any major problems of comparability, since the reliability and validity are rather high and since students in the United States are accustomed to participating in different kinds of surveys. An advantage from the ESPAD perspective is that the most important drug use questions are the same in the United States as in Europe. As mentioned, the reliability and validity appear to be high; it is assumed, however, that any remaining bias is in the direction of under-reporting.

With the above considerations in mind, there is reason to believe that the results from the United States are reasonably comparable with data from the regular ESPAD countries.

Further information: http://www.monitoringthefuture.org